Dec 03 17:39:26 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 03 17:39:26 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 03 17:39:26 crc restorecon[4686]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 
17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 
17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 
crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:26 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc 
restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 03 17:39:27 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 03 17:39:27 crc kubenswrapper[4687]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 17:39:27 crc kubenswrapper[4687]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 17:39:27 crc kubenswrapper[4687]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 17:39:27 crc kubenswrapper[4687]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 03 17:39:27 crc kubenswrapper[4687]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 17:39:27 crc kubenswrapper[4687]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.232840 4687 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236407 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236428 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236434 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236440 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236446 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236451 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236458 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236465 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236471 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236476 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236482 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236486 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236491 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236496 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236510 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236515 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236520 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236524 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236530 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236535 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236540 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236547 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236554 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236559 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236566 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236573 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236578 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236584 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236589 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236597 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236602 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236607 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236612 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236617 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236622 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236627 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236631 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236637 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236642 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236647 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236652 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236657 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236662 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236667 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236673 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236679 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236685 4687 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236690 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236695 4687 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236699 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236705 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236710 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236715 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236719 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236724 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236729 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236734 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236738 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236743 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236750 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236754 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236760 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236764 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236769 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236774 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236778 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236783 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236788 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236793 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236799 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.236804 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237108 4687 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237140 4687 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237151 4687 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237158 4687 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237165 4687 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237171 4687 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237178 4687 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237185 4687 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237191 4687 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237197 4687 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237203 4687 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237209 4687 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237215 4687 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237220 4687 flags.go:64] FLAG: --cgroup-root=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237225 4687 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237232 4687 flags.go:64] FLAG: --client-ca-file=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237237 4687 flags.go:64] FLAG: --cloud-config=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237243 4687 flags.go:64] FLAG: --cloud-provider=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237248 4687 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237254 4687 flags.go:64] FLAG: --cluster-domain=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237260 4687 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237266 4687 flags.go:64] FLAG: --config-dir=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237271 4687 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237277 4687 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237284 4687 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237290 4687 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237296 4687 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237302 4687 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237308 4687 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237313 4687 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237319 4687 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237326 4687 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237332 4687 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237339 4687 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237344 4687 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237350 4687 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237355 4687 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237361 4687 flags.go:64] FLAG: --enable-server="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237366 4687 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237373 4687 flags.go:64] FLAG: --event-burst="100"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237379 4687 flags.go:64] FLAG: --event-qps="50"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237385 4687 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237390 4687 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237396 4687 flags.go:64] FLAG: --eviction-hard=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237403 4687 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237408 4687 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237414 4687 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237419 4687 flags.go:64] FLAG: --eviction-soft=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237425 4687 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237431 4687 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237436 4687 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237442 4687 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237448 4687 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237453 4687 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237459 4687 flags.go:64] FLAG: --feature-gates=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237466 4687 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237471 4687 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237477 4687 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237484 4687 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237489 4687 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237495 4687 flags.go:64] FLAG: --help="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237501 4687 flags.go:64] FLAG: --hostname-override=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237507 4687 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237513 4687 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237518 4687 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237524 4687 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237529 4687 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237534 4687 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237540 4687 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237545 4687 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237551 4687 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237556 4687 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237562 4687 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237568 4687 flags.go:64] FLAG: --kube-reserved=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237573 4687 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237579 4687 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237584 4687 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237590 4687 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237595 4687 flags.go:64] FLAG: --lock-file=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237601 4687 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237606 4687 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237612 4687 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237620 4687 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237625 4687 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237631 4687 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237637 4687 flags.go:64] FLAG: --logging-format="text"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237642 4687 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237648 4687 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237654 4687 flags.go:64] FLAG: --manifest-url=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237659 4687 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237666 4687 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237672 4687 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237679 4687 flags.go:64] FLAG: --max-pods="110"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237685 4687 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237716 4687 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237722 4687 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237727 4687 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237732 4687 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237738 4687 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237744 4687 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237756 4687 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237761 4687 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237767 4687 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237772 4687 flags.go:64] FLAG: --pod-cidr=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237778 4687 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237787 4687 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237792 4687 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237797 4687 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237803 4687 flags.go:64] FLAG: --port="10250"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237809 4687 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237815 4687 flags.go:64] FLAG: --provider-id=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237821 4687 flags.go:64] FLAG: --qos-reserved=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237827 4687 flags.go:64] FLAG: --read-only-port="10255"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237832 4687 flags.go:64] FLAG: --register-node="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237838 4687 flags.go:64] FLAG: --register-schedulable="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237843 4687 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237852 4687 flags.go:64] FLAG: --registry-burst="10"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237858 4687 flags.go:64] FLAG: --registry-qps="5"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237863 4687 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237869 4687 flags.go:64] FLAG: --reserved-memory=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237876 4687 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237882 4687 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237887 4687 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237893 4687 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237899 4687 flags.go:64] FLAG: --runonce="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237905 4687 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237911 4687 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237917 4687 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237922 4687 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237927 4687 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237932 4687 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237936 4687 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237940 4687 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237944 4687 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237948 4687 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237952 4687 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237956 4687 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237960 4687 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237964 4687 flags.go:64] FLAG: --system-cgroups=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237968 4687 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237974 4687 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237978 4687 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237982 4687 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237987 4687 flags.go:64] FLAG: --tls-min-version=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237991 4687 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237994 4687 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.237999 4687 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.238004 4687 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.238008 4687 flags.go:64] FLAG: --v="2"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.238013 4687 flags.go:64] FLAG: --version="false"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.238019 4687 flags.go:64] FLAG: --vmodule=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.238024 4687 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.238029 4687 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238143 4687 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238148 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238153 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238157 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238161 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238165 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238169 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238173 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238176 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238180 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238184 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238187 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238191 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238194 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238198 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238202 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238208 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238213 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238217 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238221 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238227 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238231 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238235 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238239 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238243 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238248 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238252 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238255 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238259 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238263 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238267 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238270 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238274 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238277 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238281 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238286 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238290 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238295 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238299 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238302 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238306 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238309 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238313 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238317 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238321 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238324 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238328 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238331 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238338 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238342 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238345 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238349 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238354 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238357 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238361 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238364 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238368 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238371 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238375 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238378 4687 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238381 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238385 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238388 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238392 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238395 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238399 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238402 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238406 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238409 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238412 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.238416 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.238574 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.250870 4687 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.250928 4687 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251075 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251089 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251098 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251110 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251143 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251152 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251160 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251170 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251179 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251187 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251195 4687 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251203 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251210 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251222 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251230 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251238 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251245 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251253 4687 feature_gate.go:330]
unrecognized feature gate: OVNObservability Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251261 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251269 4687 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251277 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251287 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251299 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251309 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251318 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251326 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251335 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251344 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251352 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251360 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251368 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251377 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 
17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251385 4687 feature_gate.go:330] unrecognized feature gate: Example Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251395 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251403 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251411 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251420 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251427 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251435 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251442 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251450 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251458 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251466 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251475 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251482 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251491 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251498 4687 
feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251506 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251514 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251524 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251535 4687 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251544 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251554 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251565 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251574 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251584 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251592 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251600 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251608 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251616 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251624 4687 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251636 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251646 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251659 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251669 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251679 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251688 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251697 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251706 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251716 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251723 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.251738 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false 
ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251976 4687 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251989 4687 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.251997 4687 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252006 4687 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252014 4687 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252022 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252030 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252038 4687 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252046 4687 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252054 4687 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252061 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252070 4687 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252077 4687 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252085 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 17:39:27 crc 
kubenswrapper[4687]: W1203 17:39:27.252093 4687 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252101 4687 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252108 4687 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252116 4687 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252151 4687 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252162 4687 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252171 4687 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252180 4687 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252189 4687 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252198 4687 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252209 4687 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252219 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252228 4687 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252237 4687 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252246 
4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252256 4687 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252266 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252276 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252285 4687 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.252404 4687 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253020 4687 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253046 4687 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253060 4687 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253071 4687 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253081 4687 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253096 4687 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253111 4687 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253156 4687 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253167 4687 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253178 4687 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253189 4687 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253201 4687 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253223 4687 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253234 4687 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253244 4687 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253253 4687 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253264 4687 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253274 4687 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253283 4687 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253294 4687 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253304 4687 feature_gate.go:330] 
unrecognized feature gate: BootcNodeManagement Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253314 4687 feature_gate.go:330] unrecognized feature gate: Example Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253325 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253336 4687 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253354 4687 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253364 4687 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253379 4687 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253393 4687 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253405 4687 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253417 4687 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253429 4687 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253440 4687 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253451 4687 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253462 4687 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253473 4687 feature_gate.go:330] unrecognized feature gate: 
HardwareSpeed Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253572 4687 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.253594 4687 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.253611 4687 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.254685 4687 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.258190 4687 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.258343 4687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.259182 4687 server.go:997] "Starting client certificate rotation" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.259209 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.259353 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-07 23:06:35.865556898 +0000 UTC Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.259446 4687 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 845h27m8.606114202s for next certificate rotation Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.265091 4687 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.267034 4687 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.283351 4687 log.go:25] "Validated CRI v1 runtime API" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.303454 4687 log.go:25] "Validated CRI v1 image API" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.305659 4687 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.309272 4687 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-03-17-34-54-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.309308 4687 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.323734 4687 manager.go:217] Machine: {Timestamp:2025-12-03 17:39:27.322386999 +0000 UTC m=+0.213082452 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:07bf91f7-6553-4869-9d97-b90a2ed5644f BootID:ee1562dd-e220-43f1-83b5-a41fc656114f Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:70:51:a8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:70:51:a8 Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:95:5b:3a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2d:cf:3b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1b:15:1d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:73:01:72 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ba:18:bb:ff:ee:83 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:b2:e2:45:c0:7e:01 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 
Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.323965 4687 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.324189 4687 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.324778 4687 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.324960 4687 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.325001 4687 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.325234 4687 topology_manager.go:138] "Creating topology manager with none policy"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.327365 4687 container_manager_linux.go:303] "Creating device plugin manager"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.327858 4687 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.327950 4687 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.328954 4687 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.329159 4687 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.330019 4687 kubelet.go:418] "Attempting to sync node with API server"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.330054 4687 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.330101 4687 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.330144 4687 kubelet.go:324] "Adding apiserver pod source"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.330164 4687 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.332735 4687 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.333336 4687 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.333993 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.334086 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.334084 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.334200 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.334338 4687 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335011 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335049 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335062 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335073 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335090 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335137 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335149 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335165 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335194 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335211 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335235 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335247 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.335485 4687 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.336216 4687 server.go:1280] "Started kubelet"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.336429 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.336974 4687 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.337000 4687 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.338356 4687 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 03 17:39:27 crc systemd[1]: Started Kubernetes Kubelet.
Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.339331 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187dc54de484f15f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 17:39:27.336173919 +0000 UTC m=+0.226869352,LastTimestamp:2025-12-03 17:39:27.336173919 +0000 UTC m=+0.226869352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.339925 4687 server.go:460] "Adding debug handlers to kubelet server"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.340411 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.340482 4687 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.340629 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:39:34.40909581 +0000 UTC
Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.340758 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.340828 4687 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.340846 4687 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.340916 4687 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.341447 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused
Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.341529 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError"
Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.341459 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms"
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.342160 4687 factory.go:55] Registering systemd factory
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.342188 4687 factory.go:221] Registration of the systemd container factory successfully
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.342518 4687 factory.go:153] Registering CRI-O factory
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.342544 4687 factory.go:221] Registration of the crio container factory successfully
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.342628 4687 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.342654 4687 factory.go:103] Registering Raw factory
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.342674 4687 manager.go:1196] Started watching for new ooms in manager
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.343585 4687 manager.go:319] Starting recovery of all containers
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.356899 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357094 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357116 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357154 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357173 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357189 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357206 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357224 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357243 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357256 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357272 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357290 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357309 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357332 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357351 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357370 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357389 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357406 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357423 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357438 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357455 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357471 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357486 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357504 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357519 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357535 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357599 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357625 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357646 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357665 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357686 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357703 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357720 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357740 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357757 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357775 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357792 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357809 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357824 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357841 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357858 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357876 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357891 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357909 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357926 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357943 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357959 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357976 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.357991 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358008 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358023 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358044 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358103 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358144 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358161 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358176 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358192 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358209 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358227 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358244 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358261 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358280 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358298 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358317 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358334 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358351 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358369 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358384 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358403 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358419 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358440 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358459 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358477 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358494 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358511 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358527 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358545 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358561 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358578 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358593 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358612 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358627 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.358642 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.359942 4687 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360020 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360046 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360065 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360081 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360098 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360115 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360162 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360180 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360196 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360211 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360225 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360239 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360256 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360269 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360282 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360302 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360326 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360342 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360359 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360381 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360398 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" 
seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360452 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360473 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360496 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360519 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360538 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360559 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 
17:39:27.360578 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360598 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360615 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360632 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360648 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360679 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360695 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360711 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360726 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360743 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360762 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360777 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360798 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360816 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360833 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360860 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360903 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360925 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360945 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360960 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.360979 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361002 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361024 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361091 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361108 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361146 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361163 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361177 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361195 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361215 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361235 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361253 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361269 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361286 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361303 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361320 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361341 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 
03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361361 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361382 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361426 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361440 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361454 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361467 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361480 4687 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361493 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361507 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361522 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361538 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361585 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361599 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361612 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361625 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361638 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361659 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361672 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361685 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361698 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361714 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361729 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361745 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361761 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361776 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361788 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361804 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361817 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361830 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361842 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361861 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: 
I1203 17:39:27.361877 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361896 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361916 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361932 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361950 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361967 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.361995 4687 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362013 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362032 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362047 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362062 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362077 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362092 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362111 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362153 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362169 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362184 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362201 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362221 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362236 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362251 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362270 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362287 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362302 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362316 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362345 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362367 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362390 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362419 4687 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362439 4687 reconstruct.go:97] "Volume reconstruction finished" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.362460 4687 reconciler.go:26] "Reconciler: start to sync state" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.370492 4687 manager.go:324] Recovery completed Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.379053 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.382980 4687 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.383038 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.383051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.386718 4687 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.386752 4687 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.386786 4687 state_mem.go:36] "Initialized new in-memory state store" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.395352 4687 policy_none.go:49] "None policy: Start" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.397482 4687 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.397556 4687 state_mem.go:35] "Initializing new in-memory state store" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.399331 4687 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.405930 4687 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.405997 4687 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.406027 4687 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.406092 4687 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.406678 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.406741 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.441046 4687 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.469657 4687 manager.go:334] "Starting Device Plugin manager" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.469738 4687 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.469752 4687 server.go:79] "Starting device plugin registration server" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.470205 4687 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 17:39:27 crc 
kubenswrapper[4687]: I1203 17:39:27.470227 4687 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.470608 4687 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.470807 4687 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.470884 4687 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.478038 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.507010 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.507279 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.508734 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.508769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.508778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.508943 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.509296 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.509351 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.509811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.509841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.509868 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.509961 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.510094 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.510144 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.510290 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.510329 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.510341 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.510840 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.510889 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.510903 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.511151 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.511238 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.511263 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.511542 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.511577 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.511589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.512256 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.512285 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.512311 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.512402 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.512419 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.512429 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.512440 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc 
kubenswrapper[4687]: I1203 17:39:27.512645 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.512690 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.513298 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.513339 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.513364 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.513713 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.513745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.513759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.513712 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.513892 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.514904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.514951 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.514970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.542670 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.566365 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.566692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.566808 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.566920 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567014 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567133 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567265 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567361 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") 
pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567449 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567557 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567770 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567868 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") 
pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.567965 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.568060 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.570942 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.572788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.572839 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.572883 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.572927 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.573631 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 
17:39:27.640051 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187dc54de484f15f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 17:39:27.336173919 +0000 UTC m=+0.226869352,LastTimestamp:2025-12-03 17:39:27.336173919 +0000 UTC m=+0.226869352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670471 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670536 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670572 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670592 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670612 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670630 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670651 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670654 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670675 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670907 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670947 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670702 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670728 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671014 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670697 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671055 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670780 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670770 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670711 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671106 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671158 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.670712 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671275 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671349 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671349 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671390 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671423 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671438 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.671514 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.773808 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.775508 4687 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.775610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.775630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.775689 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.776651 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.852962 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.865576 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.878430 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e45c3b9c15df964ceca7875a581b1ac0f47526a8a3f472650e0099885005563b WatchSource:0}: Error finding container e45c3b9c15df964ceca7875a581b1ac0f47526a8a3f472650e0099885005563b: Status 404 returned error can't find the container with id e45c3b9c15df964ceca7875a581b1ac0f47526a8a3f472650e0099885005563b Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.882699 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.884185 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-454eb22d9259959c2bb1a05e3f38e24ca08ed58aa83ab1ee751fd0f4735cae4c WatchSource:0}: Error finding container 454eb22d9259959c2bb1a05e3f38e24ca08ed58aa83ab1ee751fd0f4735cae4c: Status 404 returned error can't find the container with id 454eb22d9259959c2bb1a05e3f38e24ca08ed58aa83ab1ee751fd0f4735cae4c Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.889328 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.895480 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3bf2594e697e76888d9051fa31afc547601979a4af88a33d860eadf43198e124 WatchSource:0}: Error finding container 3bf2594e697e76888d9051fa31afc547601979a4af88a33d860eadf43198e124: Status 404 returned error can't find the container with id 3bf2594e697e76888d9051fa31afc547601979a4af88a33d860eadf43198e124 Dec 03 17:39:27 crc kubenswrapper[4687]: I1203 17:39:27.912332 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.925539 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-1dab499db3c34b55f9337d5a6a22d19bc879c07865bf9477fb2163ad47fd8c1d WatchSource:0}: Error finding container 1dab499db3c34b55f9337d5a6a22d19bc879c07865bf9477fb2163ad47fd8c1d: Status 404 returned error can't find the container with id 1dab499db3c34b55f9337d5a6a22d19bc879c07865bf9477fb2163ad47fd8c1d Dec 03 17:39:27 crc kubenswrapper[4687]: W1203 17:39:27.938700 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e2d6671b9155043f5c6229139041bbb15c1a0b38e0758bee9e90fac4b1e47555 WatchSource:0}: Error finding container e2d6671b9155043f5c6229139041bbb15c1a0b38e0758bee9e90fac4b1e47555: Status 404 returned error can't find the container with id e2d6671b9155043f5c6229139041bbb15c1a0b38e0758bee9e90fac4b1e47555 Dec 03 17:39:27 crc kubenswrapper[4687]: E1203 17:39:27.943322 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="800ms" Dec 03 17:39:28 crc kubenswrapper[4687]: W1203 17:39:28.156862 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 17:39:28 crc kubenswrapper[4687]: E1203 17:39:28.156963 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.177230 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.179378 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.179423 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.179435 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.179467 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 17:39:28 crc kubenswrapper[4687]: E1203 17:39:28.179891 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.337792 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.340743 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 01:13:11.685052374 +0000 UTC Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.340817 4687 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Waiting 799h33m43.344238343s for next certificate rotation Dec 03 17:39:28 crc kubenswrapper[4687]: W1203 17:39:28.372691 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 17:39:28 crc kubenswrapper[4687]: E1203 17:39:28.372913 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.412059 4687 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="2c4bb067d93092c680e0f6c68d9ac832c7ecab7b29ef324938f939d0a5843d46" exitCode=0 Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.412171 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"2c4bb067d93092c680e0f6c68d9ac832c7ecab7b29ef324938f939d0a5843d46"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.412266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3bf2594e697e76888d9051fa31afc547601979a4af88a33d860eadf43198e124"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.412356 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.413486 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.413533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.413547 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.414670 4687 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180" exitCode=0 Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.414748 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.414769 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"454eb22d9259959c2bb1a05e3f38e24ca08ed58aa83ab1ee751fd0f4735cae4c"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.414855 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.416141 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.416188 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e45c3b9c15df964ceca7875a581b1ac0f47526a8a3f472650e0099885005563b"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.417068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.417097 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.417137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.418156 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.418159 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087" exitCode=0 Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.418280 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.418337 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e2d6671b9155043f5c6229139041bbb15c1a0b38e0758bee9e90fac4b1e47555"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.418865 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.418922 4687 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.418937 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.420321 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced" exitCode=0 Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.420350 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.420364 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1dab499db3c34b55f9337d5a6a22d19bc879c07865bf9477fb2163ad47fd8c1d"} Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.420454 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.420903 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.421001 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.421026 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.421040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.421475 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.421507 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.421518 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:28 crc kubenswrapper[4687]: W1203 17:39:28.641920 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 17:39:28 crc kubenswrapper[4687]: E1203 17:39:28.642052 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 17:39:28 crc kubenswrapper[4687]: E1203 17:39:28.744608 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Dec 03 17:39:28 crc kubenswrapper[4687]: W1203 17:39:28.886220 4687 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.130:6443: connect: connection refused Dec 03 17:39:28 crc kubenswrapper[4687]: E1203 17:39:28.886296 4687 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.130:6443: connect: connection refused" logger="UnhandledError" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.981200 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.982389 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.982442 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.982479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:28 crc kubenswrapper[4687]: I1203 17:39:28.982507 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 17:39:28 crc kubenswrapper[4687]: E1203 17:39:28.982927 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.130:6443: connect: connection refused" node="crc" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.423482 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dae0950bd389d58f692936a9eb8c880a239a7eff1d205c71318f07df98e5f8b2"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.423528 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b4476b45459ca2a59bddf09fe3cd6919bb80f10f388c32ffd12129506f24fba3"} Dec 03 17:39:29 crc 
kubenswrapper[4687]: I1203 17:39:29.423542 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e3fa4cc1bb33184c2f361e06794c4e72232384768d410edad74a356209aea66f"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.423620 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.424324 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.424345 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.424352 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.425995 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.426016 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.426026 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad"} Dec 03 
17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.426078 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.426582 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.426599 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.426608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.429793 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.429819 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.429831 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.429842 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 
17:39:29.431163 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0" exitCode=0 Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.431213 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.431291 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.431806 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.431824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.431831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.433223 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ec1b23ac891a3309f9be744c6c6414a34089909552a015c66f530fb14fbe5646"} Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.433311 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.435071 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.435092 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 17:39:29 crc kubenswrapper[4687]: I1203 17:39:29.435102 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.229468 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.438555 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada"} Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.438639 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.441535 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.441599 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.441626 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.444024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f"} Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.444058 4687 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f" exitCode=0 Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.444672 
4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.444721 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.444800 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.445635 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.445667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.445676 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.445910 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.445954 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.445967 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.445963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.446163 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.446184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:30 crc 
kubenswrapper[4687]: I1203 17:39:30.583767 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.584780 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.584828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.584843 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:30 crc kubenswrapper[4687]: I1203 17:39:30.584874 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.136316 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.451784 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee"} Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.451846 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94"} Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.451867 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d"} Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.451883 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087"} Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.451895 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.452798 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.452829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.452843 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.781609 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.781759 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.783219 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.783261 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.783270 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:31 crc kubenswrapper[4687]: I1203 17:39:31.788063 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 
17:39:32.461551 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.462642 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.462782 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5"} Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.462850 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.462888 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.463725 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.463789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.463811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.464611 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.464665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.464687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:32 crc 
kubenswrapper[4687]: I1203 17:39:32.464744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.464779 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.464799 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:32 crc kubenswrapper[4687]: I1203 17:39:32.584208 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.464704 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.464764 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.464887 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.466686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.466703 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.466790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.466793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.466826 4687 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.466839 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.466860 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.466738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:33 crc kubenswrapper[4687]: I1203 17:39:33.466911 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.012761 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.012972 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.014959 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.015018 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.015035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.413293 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.413540 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:35 crc kubenswrapper[4687]: 
I1203 17:39:35.415714 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.415752 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.415762 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.987007 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.987277 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.989574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.989628 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:35 crc kubenswrapper[4687]: I1203 17:39:35.989644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:37 crc kubenswrapper[4687]: I1203 17:39:37.316443 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:37 crc kubenswrapper[4687]: I1203 17:39:37.316958 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:37 crc kubenswrapper[4687]: I1203 17:39:37.318771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:37 crc kubenswrapper[4687]: I1203 17:39:37.318833 4687 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:37 crc kubenswrapper[4687]: I1203 17:39:37.318854 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:37 crc kubenswrapper[4687]: E1203 17:39:37.478523 4687 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.154599 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.154968 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.157605 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.157680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.157701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.338919 4687 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.934436 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.934629 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.935963 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.935996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:39 crc kubenswrapper[4687]: I1203 17:39:39.936008 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:40 crc kubenswrapper[4687]: I1203 17:39:40.260015 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 17:39:40 crc kubenswrapper[4687]: I1203 17:39:40.260104 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 17:39:40 crc kubenswrapper[4687]: I1203 17:39:40.269242 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 17:39:40 crc kubenswrapper[4687]: I1203 17:39:40.269369 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 
403" Dec 03 17:39:40 crc kubenswrapper[4687]: I1203 17:39:40.316514 4687 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 17:39:40 crc kubenswrapper[4687]: I1203 17:39:40.316586 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 17:39:42 crc kubenswrapper[4687]: I1203 17:39:42.592970 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:42 crc kubenswrapper[4687]: I1203 17:39:42.593162 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:42 crc kubenswrapper[4687]: I1203 17:39:42.594888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:42 crc kubenswrapper[4687]: I1203 17:39:42.594968 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:42 crc kubenswrapper[4687]: I1203 17:39:42.594982 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:42 crc kubenswrapper[4687]: I1203 17:39:42.598965 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:43 crc kubenswrapper[4687]: I1203 17:39:43.497162 4687 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 17:39:43 crc kubenswrapper[4687]: I1203 17:39:43.497233 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:43 crc kubenswrapper[4687]: I1203 17:39:43.498115 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:43 crc kubenswrapper[4687]: I1203 17:39:43.498281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:43 crc kubenswrapper[4687]: I1203 17:39:43.498355 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.260110 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.263697 4687 trace.go:236] Trace[2146544554]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 17:39:31.383) (total time: 13879ms): Dec 03 17:39:45 crc kubenswrapper[4687]: Trace[2146544554]: ---"Objects listed" error: 13879ms (17:39:45.263) Dec 03 17:39:45 crc kubenswrapper[4687]: Trace[2146544554]: [13.879881197s] [13.879881197s] END Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.263952 4687 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.263859 4687 trace.go:236] Trace[534467081]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 17:39:30.849) (total time: 14414ms): Dec 03 17:39:45 crc kubenswrapper[4687]: Trace[534467081]: ---"Objects listed" error: 14414ms (17:39:45.263) Dec 03 17:39:45 
crc kubenswrapper[4687]: Trace[534467081]: [14.414229448s] [14.414229448s] END Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.264105 4687 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.265222 4687 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.265625 4687 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.265670 4687 trace.go:236] Trace[409658214]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 17:39:30.673) (total time: 14591ms): Dec 03 17:39:45 crc kubenswrapper[4687]: Trace[409658214]: ---"Objects listed" error: 14591ms (17:39:45.265) Dec 03 17:39:45 crc kubenswrapper[4687]: Trace[409658214]: [14.591907488s] [14.591907488s] END Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.265877 4687 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.265746 4687 trace.go:236] Trace[485335740]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 17:39:30.563) (total time: 14702ms): Dec 03 17:39:45 crc kubenswrapper[4687]: Trace[485335740]: ---"Objects listed" error: 14702ms (17:39:45.265) Dec 03 17:39:45 crc kubenswrapper[4687]: Trace[485335740]: [14.70211415s] [14.70211415s] END Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.266018 4687 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.325754 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39712->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.325802 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39728->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.325854 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39712->192.168.126.11:17697: read: connection reset by peer" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.325866 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39728->192.168.126.11:17697: read: connection reset by peer" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.326259 4687 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.326296 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.340264 4687 apiserver.go:52] "Watching apiserver"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.343065 4687 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.343408 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.343825 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.343947 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.344051 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.344286 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.344339 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.344429 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.344474 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.344529 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.344748 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.350714 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.350927 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.351223 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.351547 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.351704 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.351796 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.352163 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.352703 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.351112 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.381201 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.394214 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.419219 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.432440 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.441592 4687 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.444699 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.454932 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.465661 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.466931 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.466965 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.466985 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467002 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467024 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467043 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467061 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467076 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467555 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467556 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467590 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467621 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467644 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467678 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467662 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467694 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467796 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467829 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467864 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467889 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467914 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467940 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467974 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.467999 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.468024 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.468295 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.468374 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.468466 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.468597 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.468625 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.468677 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.468780 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.468926 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469379 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469414 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469438 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469460 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469498 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469521 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469533 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469544 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469565 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469606 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469663 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469694 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469721 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469749 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469749 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469776 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469807 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469833 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469855 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469863 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469899 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469920 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469918 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469944 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.469939 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471219 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471256 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471301 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471306 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471341 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471396 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471441 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471484 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471482 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471520 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471565 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471599 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471629 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471665 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471702 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471756 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471825 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471851 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 
17:39:45.471884 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471915 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471941 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.471974 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472013 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472053 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472080 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472154 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472188 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472216 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472251 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472293 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472340 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472383 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472424 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472464 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472497 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472535 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472571 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472608 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472651 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472684 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472723 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472897 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472943 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.472979 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473019 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473057 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473090 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473142 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473178 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473212 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473258 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473292 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473321 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473357 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473395 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473429 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473457 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473491 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473524 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473549 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473584 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473619 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473656 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473684 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473718 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473753 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473787 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473824 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473857 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473886 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473921 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473959 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.473990 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474018 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474050 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474084 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474112 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474167 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474199 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 03 17:39:45 
crc kubenswrapper[4687]: I1203 17:39:45.474231 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474276 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474319 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474354 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474383 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474458 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474494 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474524 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474558 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474591 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474629 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474662 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474673 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474700 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474773 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474814 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474847 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474918 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.474988 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475024 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475168 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 
17:39:45.475201 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475232 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475479 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475510 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475506 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475578 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475615 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475648 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475711 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475757 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475781 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475815 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475846 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475875 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.475947 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476005 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476040 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476094 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476207 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476441 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476490 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476529 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476554 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476614 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476618 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476850 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476890 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476917 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.476945 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 17:39:45 crc 
kubenswrapper[4687]: I1203 17:39:45.477015 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477065 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477231 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477241 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477268 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477300 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477337 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477370 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477447 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477474 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.477993 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.478136 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.478649 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.479746 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.479808 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.478663 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.479022 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.479040 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.479045 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.479076 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.479079 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.480455 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.480867 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.480874 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.479583 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.478897 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.480960 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.479537 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.478754 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.481337 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.481446 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.481499 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.481622 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.481838 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.482094 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.482178 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.482193 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.482437 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.483281 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.483640 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.484335 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.484392 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.483898 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.484283 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.482431 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.485541 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.485624 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.485726 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.485849 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.482562 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.485917 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.482616 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.486081 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.485956 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.486291 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.486428 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: 
"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.486590 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.486674 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.486912 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.487023 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.487041 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.487738 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.487845 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.487909 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489310 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489352 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489389 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489416 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489439 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489464 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489531 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 
17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489551 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489574 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489605 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489675 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489703 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 17:39:45 crc kubenswrapper[4687]: 
I1203 17:39:45.489765 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489796 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489821 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489867 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489894 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489915 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489936 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489971 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491205 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491237 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491270 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491378 4687 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491395 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491411 4687 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491423 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491436 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491451 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491464 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491513 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491527 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491540 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491551 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath 
\"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491583 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491594 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491607 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491619 4687 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491631 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491645 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491659 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491670 4687 
reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491681 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491693 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491726 4687 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491737 4687 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491763 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491776 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491787 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491799 4687 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491814 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491826 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491837 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491849 4687 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491858 4687 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491868 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" 
DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491879 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491889 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491920 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491929 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491940 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491950 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491959 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491970 4687 reconciler_common.go:293] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491984 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491997 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492007 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492017 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492027 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492038 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492053 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: 
\"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492065 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492075 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492086 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492097 4687 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492110 4687 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492138 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492155 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492165 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492184 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492196 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492207 4687 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492218 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492228 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492238 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492262 4687 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492274 4687 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492284 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492295 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492306 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492317 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492328 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.487854 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.487871 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.488022 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.488136 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.488797 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.488849 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489010 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.495606 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.489783 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.490137 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.490307 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.490205 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.490506 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.490574 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491080 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491148 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491427 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491563 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491627 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491648 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491724 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491743 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491825 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491875 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.491890 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492314 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492622 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492686 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492998 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.492647 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.495902 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.493165 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.493184 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.493224 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.493552 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.493601 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.493636 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.493891 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.494209 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.494674 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.494819 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.494844 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.494962 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.495048 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.495225 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.495727 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.496199 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.496380 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.496466 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.496497 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.497160 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.497342 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.497400 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.497817 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.497852 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:39:45.997405914 +0000 UTC m=+18.888101537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.497946 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498063 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498106 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498108 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498165 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498208 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498187 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498244 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498333 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498285 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.498401 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.498883 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.499048 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:45.999022838 +0000 UTC m=+18.889718271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.499202 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.499417 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.499609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.499822 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.499841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.499882 4687 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.500083 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.500183 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.500270 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:46.000248491 +0000 UTC m=+18.890943924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.500344 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.499829 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.500593 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.500731 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.500819 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.500976 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.501307 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.501677 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.502281 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.503269 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.506815 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.509994 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada" exitCode=255 Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.510067 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada"} Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.511390 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 
17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.511682 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.511767 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.511939 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:46.011910717 +0000 UTC m=+18.902606360 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.514061 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.514475 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.514577 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.514659 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:45 crc kubenswrapper[4687]: E1203 17:39:45.514822 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:46.014796365 +0000 UTC m=+18.905491998 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.518864 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.518877 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.519000 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.519088 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.519192 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.519254 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.519633 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.519667 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.521909 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.521567 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.522910 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.523235 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.524341 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.524550 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.526895 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.527368 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.527388 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.527514 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.527455 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.527714 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.527892 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.528036 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.529198 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.529434 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.529634 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.529676 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.530275 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.532292 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.532689 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.532728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.532642 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.533609 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.533654 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.534790 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.535044 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.535270 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.535369 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.535507 4687 scope.go:117] "RemoveContainer" containerID="bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.535587 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.536060 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.536357 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.536545 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.536894 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod 
"6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.537564 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.539017 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.540398 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.540583 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.540890 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.546965 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.549520 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.559108 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.562941 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.597736 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600771 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600788 4687 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600797 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600805 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600822 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600831 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600839 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600849 4687 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600857 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600872 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600880 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600888 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600896 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600905 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600913 4687 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600921 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600931 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600939 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600947 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600955 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600963 4687 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc 
kubenswrapper[4687]: I1203 17:39:45.600971 4687 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600979 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600987 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.600996 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601026 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601034 4687 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601043 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601051 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601061 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601069 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601077 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601087 4687 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601095 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601103 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601110 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath 
\"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601132 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601140 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601148 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601163 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601172 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601180 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601194 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601203 4687 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601230 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601238 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601247 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601263 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601270 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601278 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601298 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601306 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601319 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601336 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601349 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601359 4687 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601367 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601375 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601383 4687 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601391 4687 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601399 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601434 4687 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601445 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601453 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601461 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601469 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601477 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601485 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601493 4687 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601501 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601509 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601517 4687 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601537 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601545 4687 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601565 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601573 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601581 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601589 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601597 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601618 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601626 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601644 4687 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601651 4687 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601661 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601689 4687 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601697 4687 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601704 4687 reconciler_common.go:293] "Volume 
detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601723 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601731 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601739 4687 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601747 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601755 4687 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601763 4687 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601771 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601780 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601831 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601840 4687 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601871 4687 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601878 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601887 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601896 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601904 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601913 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601921 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601929 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601942 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601950 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601958 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on 
node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601966 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601974 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601982 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601989 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.601997 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.602005 4687 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.602014 4687 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.602022 4687 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.602029 4687 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.602037 4687 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.602045 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.602052 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.602060 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.602071 4687 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.603370 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.603448 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.614172 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.615733 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.625551 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.636536 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.639920 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.671426 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.679655 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.691256 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.702365 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.702397 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: I1203 17:39:45.702409 4687 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 17:39:45 crc kubenswrapper[4687]: W1203 17:39:45.709644 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f269ed0d16008c0aa9945c4a5b870c56d95841c29670ba3ba40d25a56a2dc326 WatchSource:0}: Error finding container f269ed0d16008c0aa9945c4a5b870c56d95841c29670ba3ba40d25a56a2dc326: Status 404 returned error can't find the container with id f269ed0d16008c0aa9945c4a5b870c56d95841c29670ba3ba40d25a56a2dc326 Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.006857 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.007037 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:39:47.007007856 +0000 UTC m=+19.897703329 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.007408 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.007450 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.007635 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.007757 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:47.007708695 +0000 UTC m=+19.898404148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.008234 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.008314 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:47.008298141 +0000 UTC m=+19.898993614 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.108672 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.108749 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.108901 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.108950 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.108966 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.108902 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.109040 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:47.109016649 +0000 UTC m=+19.999712082 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.109048 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.109066 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.109114 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-03 17:39:47.109098591 +0000 UTC m=+19.999794044 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.406306 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:46 crc kubenswrapper[4687]: E1203 17:39:46.406482 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.514792 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.516984 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed"} Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.517364 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.517928 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f8c31c4dfd5d5702c93961e32d99e670f0a083d04c7a8b538d20767579211001"} Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.519883 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab"} Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.519906 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6"} Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.519918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f269ed0d16008c0aa9945c4a5b870c56d95841c29670ba3ba40d25a56a2dc326"} Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.521163 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa"} Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.521190 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"477ec5c17f6488a99d67b099eae555c553f394223c2c412d59a59a6eccb56222"} Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.532352 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.542489 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.554197 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.564027 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.575781 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.594501 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.608204 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.623890 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.635564 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.647059 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.657427 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.667845 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.678700 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:46 crc kubenswrapper[4687]: I1203 17:39:46.688039 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.016227 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.016337 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.016382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.016533 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.016615 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:39:49.016572399 +0000 UTC m=+21.907267862 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.016638 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.016677 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:49.016664481 +0000 UTC m=+21.907359924 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.016786 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:49.016755744 +0000 UTC m=+21.907451337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.117276 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.117338 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.117458 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.117473 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.117487 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.117538 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:49.117521013 +0000 UTC m=+22.008216446 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.117596 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.117605 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.117613 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.117634 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-03 17:39:49.117627496 +0000 UTC m=+22.008322929 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.325200 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.331833 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.343967 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.346405 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.367993 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.387307 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.402249 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.406466 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.406536 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.406587 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:39:47 crc kubenswrapper[4687]: E1203 17:39:47.406707 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.411600 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.412372 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.414264 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.415104 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.416466 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.417168 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.417098 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.417942 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.419313 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.420153 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.424064 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.424862 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.426202 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.426910 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.427707 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.428515 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.429298 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.430076 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.430703 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.432762 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.433511 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.433637 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.434257 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.434882 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.435320 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.435981 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.437722 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.438443 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.439650 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.440330 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.441903 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.442734 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.443833 4687 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.443981 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.446291 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.446911 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.451377 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.453088 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.453060 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.453969 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.454936 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.455572 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.456687 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.457371 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.458438 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.459165 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.460100 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.460611 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.461545 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.462090 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.463292 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.463769 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.464658 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.465173 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.466066 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.466690 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.467274 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.467329 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.479919 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.497482 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.511184 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.527943 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.543326 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.558449 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.572384 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.587273 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.598322 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.608692 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.618570 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.626813 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.637952 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.650109 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:47 crc kubenswrapper[4687]: I1203 17:39:47.664291 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.263200 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kbjvs"] Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.263795 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gz2wq"] Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.263958 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.264026 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-hhb6c"] Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.264748 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.264761 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hhb6c" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.265726 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.269461 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.270080 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.275703 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.275783 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.275815 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.276325 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.276364 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.276389 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.276401 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.276961 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.277457 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.280448 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.296343 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.314998 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332250 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-system-cni-dir\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332307 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs274\" (UniqueName: \"kubernetes.io/projected/b2458ef0-c3e4-4bb4-9698-92445412cca7-kube-api-access-cs274\") pod \"node-resolver-hhb6c\" (UID: \"b2458ef0-c3e4-4bb4-9698-92445412cca7\") " pod="openshift-dns/node-resolver-hhb6c" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332384 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-run-k8s-cni-cncf-io\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332409 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6mz\" (UniqueName: \"kubernetes.io/projected/fab93456-303f-4c39-93a9-f52dcab12ac1-kube-api-access-dw6mz\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332435 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/ede1a722-2df8-433e-b8be-82c434be7d02-multus-daemon-config\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332454 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-run-multus-certs\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332473 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fab93456-303f-4c39-93a9-f52dcab12ac1-rootfs\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332567 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-var-lib-cni-bin\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332631 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-var-lib-cni-multus\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332664 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4wnj\" 
(UniqueName: \"kubernetes.io/projected/ede1a722-2df8-433e-b8be-82c434be7d02-kube-api-access-q4wnj\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332683 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b2458ef0-c3e4-4bb4-9698-92445412cca7-hosts-file\") pod \"node-resolver-hhb6c\" (UID: \"b2458ef0-c3e4-4bb4-9698-92445412cca7\") " pod="openshift-dns/node-resolver-hhb6c" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332723 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ede1a722-2df8-433e-b8be-82c434be7d02-cni-binary-copy\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332751 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-var-lib-kubelet\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fab93456-303f-4c39-93a9-f52dcab12ac1-proxy-tls\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332816 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fab93456-303f-4c39-93a9-f52dcab12ac1-mcd-auth-proxy-config\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332841 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-cnibin\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332873 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-run-netns\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332900 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-multus-socket-dir-parent\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332956 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-multus-cni-dir\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.332978 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-os-release\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.333018 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-multus-conf-dir\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.333056 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-etc-kubernetes\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.333079 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-hostroot\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.333459 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.350776 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.371463 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.387537 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.401578 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.406635 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:48 crc kubenswrapper[4687]: E1203 17:39:48.406809 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.418723 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.434311 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-run-k8s-cni-cncf-io\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.434665 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ede1a722-2df8-433e-b8be-82c434be7d02-multus-daemon-config\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " 
pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.434784 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-run-multus-certs\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.434855 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fab93456-303f-4c39-93a9-f52dcab12ac1-rootfs\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.434906 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fab93456-303f-4c39-93a9-f52dcab12ac1-rootfs\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.434868 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-run-multus-certs\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.434929 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw6mz\" (UniqueName: \"kubernetes.io/projected/fab93456-303f-4c39-93a9-f52dcab12ac1-kube-api-access-dw6mz\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 
17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435101 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-var-lib-cni-bin\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-var-lib-cni-multus\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435251 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-var-lib-cni-multus\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435270 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-var-lib-cni-bin\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435282 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4wnj\" (UniqueName: \"kubernetes.io/projected/ede1a722-2df8-433e-b8be-82c434be7d02-kube-api-access-q4wnj\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435458 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b2458ef0-c3e4-4bb4-9698-92445412cca7-hosts-file\") pod \"node-resolver-hhb6c\" (UID: \"b2458ef0-c3e4-4bb4-9698-92445412cca7\") " pod="openshift-dns/node-resolver-hhb6c" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435524 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ede1a722-2df8-433e-b8be-82c434be7d02-multus-daemon-config\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435530 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b2458ef0-c3e4-4bb4-9698-92445412cca7-hosts-file\") pod \"node-resolver-hhb6c\" (UID: \"b2458ef0-c3e4-4bb4-9698-92445412cca7\") " pod="openshift-dns/node-resolver-hhb6c" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.434509 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-run-k8s-cni-cncf-io\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435663 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-cnibin\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435674 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-cnibin\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " 
pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435808 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ede1a722-2df8-433e-b8be-82c434be7d02-cni-binary-copy\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435882 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-var-lib-kubelet\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435952 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fab93456-303f-4c39-93a9-f52dcab12ac1-proxy-tls\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436028 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fab93456-303f-4c39-93a9-f52dcab12ac1-mcd-auth-proxy-config\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.435928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-var-lib-kubelet\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc 
kubenswrapper[4687]: I1203 17:39:48.436182 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-multus-socket-dir-parent\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436253 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-run-netns\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436317 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-host-run-netns\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436322 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ede1a722-2df8-433e-b8be-82c434be7d02-cni-binary-copy\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436284 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-multus-socket-dir-parent\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436326 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-multus-cni-dir\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436543 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-os-release\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436613 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-multus-conf-dir\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436752 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-etc-kubernetes\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-hostroot\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436907 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-etc-kubernetes\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 
crc kubenswrapper[4687]: I1203 17:39:48.436465 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-multus-cni-dir\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436951 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-os-release\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436792 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fab93456-303f-4c39-93a9-f52dcab12ac1-mcd-auth-proxy-config\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436953 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-hostroot\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.436861 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-multus-conf-dir\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.437174 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-system-cni-dir\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.437251 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs274\" (UniqueName: \"kubernetes.io/projected/b2458ef0-c3e4-4bb4-9698-92445412cca7-kube-api-access-cs274\") pod \"node-resolver-hhb6c\" (UID: \"b2458ef0-c3e4-4bb4-9698-92445412cca7\") " pod="openshift-dns/node-resolver-hhb6c" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.437318 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ede1a722-2df8-433e-b8be-82c434be7d02-system-cni-dir\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.439963 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.441835 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fab93456-303f-4c39-93a9-f52dcab12ac1-proxy-tls\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.457169 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw6mz\" (UniqueName: \"kubernetes.io/projected/fab93456-303f-4c39-93a9-f52dcab12ac1-kube-api-access-dw6mz\") pod \"machine-config-daemon-gz2wq\" (UID: \"fab93456-303f-4c39-93a9-f52dcab12ac1\") " pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.459613 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4wnj\" (UniqueName: \"kubernetes.io/projected/ede1a722-2df8-433e-b8be-82c434be7d02-kube-api-access-q4wnj\") pod \"multus-kbjvs\" (UID: \"ede1a722-2df8-433e-b8be-82c434be7d02\") " pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.461381 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cs274\" (UniqueName: \"kubernetes.io/projected/b2458ef0-c3e4-4bb4-9698-92445412cca7-kube-api-access-cs274\") pod \"node-resolver-hhb6c\" (UID: \"b2458ef0-c3e4-4bb4-9698-92445412cca7\") " pod="openshift-dns/node-resolver-hhb6c" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.465958 4687 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.468327 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.468385 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.468397 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.468490 4687 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.469006 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.479899 4687 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.480285 4687 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.481919 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.481974 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.481994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.482021 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.482036 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:48Z","lastTransitionTime":"2025-12-03T17:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.487912 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: E1203 17:39:48.501535 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.507585 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.507632 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.507641 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.507658 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.507670 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:48Z","lastTransitionTime":"2025-12-03T17:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.519026 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: E1203 17:39:48.525852 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.529728 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.529792 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.529810 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.529838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.529855 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:48Z","lastTransitionTime":"2025-12-03T17:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.538558 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: E1203 17:39:48.541830 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.546153 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.546202 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.546212 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.546231 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.546242 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:48Z","lastTransitionTime":"2025-12-03T17:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:48 crc kubenswrapper[4687]: E1203 17:39:48.573309 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.577162 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kbjvs" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.578379 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.579078 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.579097 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 
17:39:48.579106 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.579142 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.579156 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:48Z","lastTransitionTime":"2025-12-03T17:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.582150 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.588564 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-hhb6c" Dec 03 17:39:48 crc kubenswrapper[4687]: W1203 17:39:48.594279 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podede1a722_2df8_433e_b8be_82c434be7d02.slice/crio-c067a4f97c12c553b48c93bd65d35d0dad12e46ac0e3f146cfc57cd091e8c51c WatchSource:0}: Error finding container c067a4f97c12c553b48c93bd65d35d0dad12e46ac0e3f146cfc57cd091e8c51c: Status 404 returned error can't find the container with id c067a4f97c12c553b48c93bd65d35d0dad12e46ac0e3f146cfc57cd091e8c51c Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.615376 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: E1203 17:39:48.628821 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: E1203 17:39:48.628959 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.644166 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.644217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.644229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.644251 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.644263 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:48Z","lastTransitionTime":"2025-12-03T17:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.651838 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.702581 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-hrqh4"] Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.703214 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.706306 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.706488 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.721798 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.746030 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.746779 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.746813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.746824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 
17:39:48.746838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.746847 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:48Z","lastTransitionTime":"2025-12-03T17:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.763931 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.778433 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.792285 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.809050 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.823602 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.840744 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.841051 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-system-cni-dir\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.841090 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-os-release\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.841138 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.841158 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2eb80768-2a1e-4632-8f1f-453cce62fd5f-cni-binary-copy\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.841177 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqfw\" (UniqueName: \"kubernetes.io/projected/2eb80768-2a1e-4632-8f1f-453cce62fd5f-kube-api-access-hdqfw\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.841248 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2eb80768-2a1e-4632-8f1f-453cce62fd5f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " 
pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.841266 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-cnibin\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.849454 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.849498 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.849510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.849531 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.849545 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:48Z","lastTransitionTime":"2025-12-03T17:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.859437 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.879140 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.892740 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.906855 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.923283 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.939252 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.942183 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-os-release\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " 
pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.942451 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-os-release\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.942476 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.942675 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2eb80768-2a1e-4632-8f1f-453cce62fd5f-cni-binary-copy\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.942796 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqfw\" (UniqueName: \"kubernetes.io/projected/2eb80768-2a1e-4632-8f1f-453cce62fd5f-kube-api-access-hdqfw\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.942918 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2eb80768-2a1e-4632-8f1f-453cce62fd5f-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.943023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-cnibin\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.943098 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-cnibin\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.942714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.943413 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-system-cni-dir\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.943512 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/2eb80768-2a1e-4632-8f1f-453cce62fd5f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.943598 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2eb80768-2a1e-4632-8f1f-453cce62fd5f-cni-binary-copy\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.943627 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2eb80768-2a1e-4632-8f1f-453cce62fd5f-system-cni-dir\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.952564 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.952900 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.952971 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.953091 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.953168 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:48Z","lastTransitionTime":"2025-12-03T17:39:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.956888 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.963085 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqfw\" (UniqueName: \"kubernetes.io/projected/2eb80768-2a1e-4632-8f1f-453cce62fd5f-kube-api-access-hdqfw\") pod \"multus-additional-cni-plugins-hrqh4\" (UID: \"2eb80768-2a1e-4632-8f1f-453cce62fd5f\") " pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:48 crc kubenswrapper[4687]: I1203 17:39:48.970402 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.029725 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.043824 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.043908 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.043957 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:39:53.043930437 +0000 UTC m=+25.934625880 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.043980 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.044001 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.044028 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:53.044015269 +0000 UTC m=+25.934710702 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.044207 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.044252 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:53.044242275 +0000 UTC m=+25.934937718 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:49 crc kubenswrapper[4687]: W1203 17:39:49.043822 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb80768_2a1e_4632_8f1f_453cce62fd5f.slice/crio-1f52d3a42954b542b9fd6c2a189c11756b2a84eca7818a3868930d5c01ee7ea0 WatchSource:0}: Error finding container 1f52d3a42954b542b9fd6c2a189c11756b2a84eca7818a3868930d5c01ee7ea0: Status 404 returned error can't find the container with id 1f52d3a42954b542b9fd6c2a189c11756b2a84eca7818a3868930d5c01ee7ea0 Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.056035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.056109 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.056138 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.056157 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.056171 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.102058 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-668q2"] Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.103588 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.107470 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.110625 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.110726 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.111005 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.111056 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.111236 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.114107 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.123880 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.145503 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.145602 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.145770 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.145796 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.145816 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.145832 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.145834 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.145851 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.145918 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:53.145897239 +0000 UTC m=+26.036592682 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.145939 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 17:39:53.14593014 +0000 UTC m=+26.036625583 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.150755 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.158922 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.158981 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.159000 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.159030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.159045 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.163426 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.181631 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.203938 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.222169 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.240382 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246074 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-bin\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246138 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-config\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246165 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kjw5\" (UniqueName: \"kubernetes.io/projected/f7fe22da-1ea3-49ba-b2c6-851ff064db76-kube-api-access-6kjw5\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246213 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-slash\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246235 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-openvswitch\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246433 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-log-socket\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246514 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-env-overrides\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246542 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-netd\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246563 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-var-lib-openvswitch\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246583 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovn-node-metrics-cert\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: 
I1203 17:39:49.246705 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-netns\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246731 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-systemd\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246855 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-kubelet\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246893 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-node-log\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246918 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246941 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-etc-openvswitch\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246962 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-ovn-kubernetes\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.246983 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-script-lib\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.247157 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-systemd-units\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.247262 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-ovn\") pod \"ovnkube-node-668q2\" (UID: 
\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.262177 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.262220 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.262232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.262252 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.262265 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.262313 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.291738 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 
17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.311973 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.336989 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348056 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-script-lib\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348093 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-systemd-units\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348111 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-ovn\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348145 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-bin\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348161 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-config\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348180 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kjw5\" (UniqueName: \"kubernetes.io/projected/f7fe22da-1ea3-49ba-b2c6-851ff064db76-kube-api-access-6kjw5\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348201 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-slash\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348216 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-openvswitch\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348258 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-log-socket\") 
pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348273 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-env-overrides\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348287 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-netd\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-var-lib-openvswitch\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348318 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovn-node-metrics-cert\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348317 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-systemd-units\") pod \"ovnkube-node-668q2\" (UID: 
\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348365 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-netns\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348335 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-netns\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348358 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-ovn\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-systemd\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348480 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-bin\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc 
kubenswrapper[4687]: I1203 17:39:49.348498 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-systemd\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348522 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-kubelet\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348555 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-node-log\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348596 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-netd\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348601 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-etc-openvswitch\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348626 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-ovn-kubernetes\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348635 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-var-lib-openvswitch\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348653 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348724 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-kubelet\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348761 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-node-log\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348791 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-etc-openvswitch\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-ovn-kubernetes\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348855 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348891 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-openvswitch\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348925 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-slash\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.348959 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-log-socket\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.349202 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-config\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.349398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-env-overrides\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.353158 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovn-node-metrics-cert\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.361815 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.365307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.365345 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.365355 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.365373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.365384 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.369629 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kjw5\" (UniqueName: \"kubernetes.io/projected/f7fe22da-1ea3-49ba-b2c6-851ff064db76-kube-api-access-6kjw5\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.380462 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.406567 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.406613 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.406756 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:39:49 crc kubenswrapper[4687]: E1203 17:39:49.406927 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.468583 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.468616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.468624 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.468640 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.468649 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.528980 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.529864 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" event={"ID":"2eb80768-2a1e-4632-8f1f-453cce62fd5f","Type":"ContainerStarted","Data":"1f52d3a42954b542b9fd6c2a189c11756b2a84eca7818a3868930d5c01ee7ea0"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.531047 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.531072 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"63ec9d796347ce9f7f4846f85714b7f7ba82d46c314cb39349b245b28a8da84d"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.531407 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-script-lib\") pod \"ovnkube-node-668q2\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.534972 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hhb6c" 
event={"ID":"b2458ef0-c3e4-4bb4-9698-92445412cca7","Type":"ContainerStarted","Data":"df3c1e942adc94109d7ef7f9d6659ad6eb00c59cf9064ffd53797c4f37a5538f"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.536102 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbjvs" event={"ID":"ede1a722-2df8-433e-b8be-82c434be7d02","Type":"ContainerStarted","Data":"261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.536142 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbjvs" event={"ID":"ede1a722-2df8-433e-b8be-82c434be7d02","Type":"ContainerStarted","Data":"c067a4f97c12c553b48c93bd65d35d0dad12e46ac0e3f146cfc57cd091e8c51c"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.571589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.571625 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.571635 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.571650 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.571660 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.674693 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.674766 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.674777 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.674797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.674809 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.718471 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.777969 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.778011 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.778022 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.778040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.778050 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.881505 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.881542 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.881574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.881593 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.881604 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.972401 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.984543 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.984601 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.984613 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.984634 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.984646 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:49Z","lastTransitionTime":"2025-12-03T17:39:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.990718 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.992339 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 03 17:39:49 crc kubenswrapper[4687]: I1203 17:39:49.997776 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.015621 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.037660 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.053728 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.068646 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.084980 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.087026 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.087052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.087062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.087079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.087092 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:50Z","lastTransitionTime":"2025-12-03T17:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.104404 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.123293 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.148592 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.166214 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.176934 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.194539 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.194591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.194602 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.194620 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.194631 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:50Z","lastTransitionTime":"2025-12-03T17:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.205237 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.219666 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.234456 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.256039 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.270159 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.286352 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.297761 4687 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.297807 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.297816 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.297832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.297842 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:50Z","lastTransitionTime":"2025-12-03T17:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.310447 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.323982 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 
17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.334572 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.347890 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.361978 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.377401 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.400962 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.401343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.401365 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.401374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.401396 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.401406 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:50Z","lastTransitionTime":"2025-12-03T17:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.406504 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:50 crc kubenswrapper[4687]: E1203 17:39:50.406622 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.415552 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.438351 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.451911 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.504828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.504886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.504904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.504926 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.504942 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:50Z","lastTransitionTime":"2025-12-03T17:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.542769 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.544173 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca" exitCode=0 Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.544258 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.544320 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"74f087e99c31a3e0a1bd0519a026b43b9cb105eed5b44f6261647bc63f7809be"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.545834 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-hhb6c" event={"ID":"b2458ef0-c3e4-4bb4-9698-92445412cca7","Type":"ContainerStarted","Data":"233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.548717 4687 
generic.go:334] "Generic (PLEG): container finished" podID="2eb80768-2a1e-4632-8f1f-453cce62fd5f" containerID="5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f" exitCode=0 Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.548966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" event={"ID":"2eb80768-2a1e-4632-8f1f-453cce62fd5f","Type":"ContainerDied","Data":"5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.581560 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.599146 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.609140 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.609191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.609204 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.609225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.609237 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:50Z","lastTransitionTime":"2025-12-03T17:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.612016 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.631784 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.648078 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.667573 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.689580 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.702753 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.713811 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.713848 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.713859 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.713884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.714328 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:50Z","lastTransitionTime":"2025-12-03T17:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.723286 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.742921 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.758707 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.771272 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.790124 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.807714 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.818847 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.818899 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.818912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.818938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.818952 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:50Z","lastTransitionTime":"2025-12-03T17:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.836578 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.864099 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.886722 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.913474 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.920922 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.920950 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.920959 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.920973 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.920983 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:50Z","lastTransitionTime":"2025-12-03T17:39:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.926086 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.953958 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.968003 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.978990 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b
56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:50 crc kubenswrapper[4687]: I1203 17:39:50.995885 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:50Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.009707 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.023130 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b49
1cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.025251 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 
03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.025294 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.025311 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.025333 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.025351 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.041875 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.056734 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.071826 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.128103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.128361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.128446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.128519 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.128586 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.232467 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.232510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.232522 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.232540 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.232551 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.335464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.335963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.335975 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.335997 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.336010 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.406812 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.406830 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:51 crc kubenswrapper[4687]: E1203 17:39:51.407031 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:39:51 crc kubenswrapper[4687]: E1203 17:39:51.407090 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.438226 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.438289 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.438303 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.438329 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.438344 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.541772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.541872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.541920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.542088 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.542593 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.557276 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.557346 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.557360 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.557373 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.557386 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.560536 4687 generic.go:334] "Generic (PLEG): container finished" podID="2eb80768-2a1e-4632-8f1f-453cce62fd5f" containerID="8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175" exitCode=0 Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.560581 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" event={"ID":"2eb80768-2a1e-4632-8f1f-453cce62fd5f","Type":"ContainerDied","Data":"8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.576726 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.604762 4687 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.622946 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.638432 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.646445 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.646484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.646495 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 
17:39:51.646512 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.646524 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.653887 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.669609 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.683847 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.699751 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.713886 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 
17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.724905 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.749366 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.749440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.749456 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.749477 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.749492 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.750171 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.766252 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.783762 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.805430 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:51Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.853464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.853518 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.853552 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.853576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.853588 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.957362 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.957436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.957449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.957492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:51 crc kubenswrapper[4687]: I1203 17:39:51.957511 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:51Z","lastTransitionTime":"2025-12-03T17:39:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.061871 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.061948 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.061963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.061990 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.062027 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.166133 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.166272 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.166301 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.166335 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.166363 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.269711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.269812 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.269824 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.269850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.269864 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.372837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.372903 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.372920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.372945 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.372964 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.406378 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:52 crc kubenswrapper[4687]: E1203 17:39:52.406580 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.476287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.476359 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.476373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.476419 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.476437 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.549351 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7bvc5"] Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.549880 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.551617 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.553589 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.554583 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.555966 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.566984 4687 generic.go:334] "Generic (PLEG): container finished" podID="2eb80768-2a1e-4632-8f1f-453cce62fd5f" containerID="89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b" exitCode=0 Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.567048 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" event={"ID":"2eb80768-2a1e-4632-8f1f-453cce62fd5f","Type":"ContainerDied","Data":"89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.569978 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.571981 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.578929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc 
kubenswrapper[4687]: I1203 17:39:52.578967 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.578976 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.578992 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.579001 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.591052 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.612312 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.628300 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.641706 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.655360 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.672961 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.682831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.682873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.682885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.682904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.682918 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.686867 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cb6870b7-890e-4352-b873-f6676b3315bd-serviceca\") pod \"node-ca-7bvc5\" (UID: \"cb6870b7-890e-4352-b873-f6676b3315bd\") " pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.687009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb6870b7-890e-4352-b873-f6676b3315bd-host\") pod \"node-ca-7bvc5\" (UID: \"cb6870b7-890e-4352-b873-f6676b3315bd\") " pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.687070 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8nb8\" (UniqueName: \"kubernetes.io/projected/cb6870b7-890e-4352-b873-f6676b3315bd-kube-api-access-l8nb8\") pod \"node-ca-7bvc5\" (UID: \"cb6870b7-890e-4352-b873-f6676b3315bd\") " pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.688734 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.702605 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.714995 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.728981 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.750861 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.763626 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.775122 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b
56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.786246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.786294 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.786304 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:52 crc 
kubenswrapper[4687]: I1203 17:39:52.786324 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.786337 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.787751 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cb6870b7-890e-4352-b873-f6676b3315bd-serviceca\") pod \"node-ca-7bvc5\" (UID: \"cb6870b7-890e-4352-b873-f6676b3315bd\") " pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.787800 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb6870b7-890e-4352-b873-f6676b3315bd-host\") pod \"node-ca-7bvc5\" (UID: \"cb6870b7-890e-4352-b873-f6676b3315bd\") " pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.787824 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8nb8\" (UniqueName: \"kubernetes.io/projected/cb6870b7-890e-4352-b873-f6676b3315bd-kube-api-access-l8nb8\") pod \"node-ca-7bvc5\" (UID: \"cb6870b7-890e-4352-b873-f6676b3315bd\") " pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.788921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cb6870b7-890e-4352-b873-f6676b3315bd-serviceca\") pod 
\"node-ca-7bvc5\" (UID: \"cb6870b7-890e-4352-b873-f6676b3315bd\") " pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.788972 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb6870b7-890e-4352-b873-f6676b3315bd-host\") pod \"node-ca-7bvc5\" (UID: \"cb6870b7-890e-4352-b873-f6676b3315bd\") " pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.795681 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.809525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8nb8\" (UniqueName: \"kubernetes.io/projected/cb6870b7-890e-4352-b873-f6676b3315bd-kube-api-access-l8nb8\") pod \"node-ca-7bvc5\" (UID: \"cb6870b7-890e-4352-b873-f6676b3315bd\") " pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.814246 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.825908 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.837256 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b
56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.857598 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.864770 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-7bvc5" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.871887 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.886198 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.889678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.889705 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.889717 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.889732 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.889742 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.921769 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.943809 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.967280 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.987176 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:52Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.993058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.993093 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.993101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 
17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.993115 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:52 crc kubenswrapper[4687]: I1203 17:39:52.993127 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:52Z","lastTransitionTime":"2025-12-03T17:39:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.004090 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operat
or\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.020283 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.034350 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.044772 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.058268 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 
17:39:53.090462 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.090620 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.090714 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.090762 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:40:01.090704148 +0000 UTC m=+33.981399581 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.090821 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:01.090806781 +0000 UTC m=+33.981502434 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.090875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.091391 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.091456 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:01.091436567 +0000 UTC m=+33.982132180 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.095765 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.095880 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.096138 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.096182 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.096203 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:53Z","lastTransitionTime":"2025-12-03T17:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.193690 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.193762 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.193947 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.193967 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.193982 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.194075 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:01.194054257 +0000 UTC m=+34.084749680 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.194520 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.194569 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.194584 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.194668 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:01.194645833 +0000 UTC m=+34.085341276 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.199371 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.199410 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.199419 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.199434 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.199445 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:53Z","lastTransitionTime":"2025-12-03T17:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.302184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.302232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.302243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.302259 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.302270 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:53Z","lastTransitionTime":"2025-12-03T17:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.408602 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.408636 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.408751 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:39:53 crc kubenswrapper[4687]: E1203 17:39:53.409074 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.411583 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.411620 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.411632 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.411650 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.411662 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:53Z","lastTransitionTime":"2025-12-03T17:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.515187 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.515260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.515272 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.515288 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.515299 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:53Z","lastTransitionTime":"2025-12-03T17:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.579868 4687 generic.go:334] "Generic (PLEG): container finished" podID="2eb80768-2a1e-4632-8f1f-453cce62fd5f" containerID="f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f" exitCode=0 Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.579959 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" event={"ID":"2eb80768-2a1e-4632-8f1f-453cce62fd5f","Type":"ContainerDied","Data":"f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.583042 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7bvc5" event={"ID":"cb6870b7-890e-4352-b873-f6676b3315bd","Type":"ContainerStarted","Data":"b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.583073 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7bvc5" event={"ID":"cb6870b7-890e-4352-b873-f6676b3315bd","Type":"ContainerStarted","Data":"e501e17b8db8dee4b3f42a73c6bfdbc50fd9750e633d7905d43508767cf3781f"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.595017 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b
56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.619136 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.620511 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.620564 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.620587 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.620615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.620638 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:53Z","lastTransitionTime":"2025-12-03T17:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.633508 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.648455 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.660655 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.674575 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.691813 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.709034 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.725029 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.725077 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.725087 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.725105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.725114 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:53Z","lastTransitionTime":"2025-12-03T17:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.726746 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.739598 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 
17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.753902 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.769998 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda755239
2a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.785064 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.798778 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.827682 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.830166 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.830211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.830225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.830244 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.830255 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:53Z","lastTransitionTime":"2025-12-03T17:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.844975 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.858257 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.872844 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.882942 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.899052 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.917554 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.932353 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.934005 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.934062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.934074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.934092 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.934102 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:53Z","lastTransitionTime":"2025-12-03T17:39:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.949582 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.979405 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:53 crc kubenswrapper[4687]: I1203 17:39:53.993089 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:53Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.009536 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.025967 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.037722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.037777 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc 
kubenswrapper[4687]: I1203 17:39:54.037790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.037815 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.037849 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.041634 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.058171 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.070537 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.140047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.140134 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.140147 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.140168 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.140182 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.243191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.243257 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.243275 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.243300 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.243319 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.346252 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.346328 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.346354 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.346386 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.346410 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.407163 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:54 crc kubenswrapper[4687]: E1203 17:39:54.407353 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.448760 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.448810 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.448823 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.448842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.448855 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.552041 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.552178 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.552205 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.552245 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.552308 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.595544 4687 generic.go:334] "Generic (PLEG): container finished" podID="2eb80768-2a1e-4632-8f1f-453cce62fd5f" containerID="2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7" exitCode=0 Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.595669 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" event={"ID":"2eb80768-2a1e-4632-8f1f-453cce62fd5f","Type":"ContainerDied","Data":"2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.601637 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.633120 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.649886 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.656958 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.657025 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.657035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.657062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.657077 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.666649 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.687859 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.705351 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.720071 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.736862 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.756970 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.760473 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.760522 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.760532 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.760554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.760570 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.773798 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.787390 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.805043 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.822030 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.851884 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.863526 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.863579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.863593 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.863619 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.863635 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.893346 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.921811 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:54Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.966608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.966654 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.966669 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.966686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:54 crc kubenswrapper[4687]: I1203 17:39:54.966696 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:54Z","lastTransitionTime":"2025-12-03T17:39:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.070778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.071438 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.071602 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.071749 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.071996 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:55Z","lastTransitionTime":"2025-12-03T17:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.175542 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.175587 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.175598 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.175619 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.175632 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:55Z","lastTransitionTime":"2025-12-03T17:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.279062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.279163 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.279183 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.279213 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.279237 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:55Z","lastTransitionTime":"2025-12-03T17:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.383270 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.383356 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.383371 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.383391 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.383402 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:55Z","lastTransitionTime":"2025-12-03T17:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.406828 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.406890 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:55 crc kubenswrapper[4687]: E1203 17:39:55.407028 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:39:55 crc kubenswrapper[4687]: E1203 17:39:55.407253 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.486587 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.486657 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.486675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.486702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.486770 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:55Z","lastTransitionTime":"2025-12-03T17:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.590162 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.590222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.590236 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.590257 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.590269 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:55Z","lastTransitionTime":"2025-12-03T17:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.610590 4687 generic.go:334] "Generic (PLEG): container finished" podID="2eb80768-2a1e-4632-8f1f-453cce62fd5f" containerID="ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29" exitCode=0 Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.610658 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" event={"ID":"2eb80768-2a1e-4632-8f1f-453cce62fd5f","Type":"ContainerDied","Data":"ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.636570 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.658076 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.670803 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.688943 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.694263 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.694295 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.694306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.694323 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.694334 4687 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:55Z","lastTransitionTime":"2025-12-03T17:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.703186 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.722569 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095
ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.733766 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.746114 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.765376 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.779938 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.793790 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.796921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.796952 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.796960 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 
17:39:55.796976 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.796990 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:55Z","lastTransitionTime":"2025-12-03T17:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.808897 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.822850 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.831316 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.842587 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:55Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.898903 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.899500 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.899528 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.899567 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:55 crc kubenswrapper[4687]: I1203 17:39:55.899596 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:55Z","lastTransitionTime":"2025-12-03T17:39:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.003398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.003464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.003482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.003508 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.003526 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.107357 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.107413 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.107423 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.107444 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.107456 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.210424 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.210482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.210498 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.210524 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.210547 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.313611 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.313657 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.313669 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.313687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.313699 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.406765 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:56 crc kubenswrapper[4687]: E1203 17:39:56.406939 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.416218 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.416280 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.416300 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.416324 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.416343 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.519229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.519295 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.519312 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.519340 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.519362 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.619649 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.620072 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.620103 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.621562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.621588 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.621600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.621615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.621626 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.625471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" event={"ID":"2eb80768-2a1e-4632-8f1f-453cce62fd5f","Type":"ContainerStarted","Data":"24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.645816 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.645915 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.650569 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.664084 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.680875 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.698215 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.716078 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.725087 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.725131 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.725157 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.725176 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.725185 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.742415 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.754936 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.767279 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.788680 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.802489 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.819380 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.827698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.827730 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.827738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 
17:39:56.827751 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.827760 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.833749 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.847331 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.858743 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.873267 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.898886 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.910811 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.923238 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b
56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.929673 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.929710 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.929726 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:56 crc 
kubenswrapper[4687]: I1203 17:39:56.929746 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.929761 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:56Z","lastTransitionTime":"2025-12-03T17:39:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.941820 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.954836 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.965034 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.977518 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:56 crc kubenswrapper[4687]: I1203 17:39:56.992265 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:56Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.005962 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.017636 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.032130 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.032198 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.032209 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.032228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.032240 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.033234 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.051678 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.069739 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.083551 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.095220 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.134946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.134996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.135009 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.135025 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.135037 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.236927 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.236960 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.236971 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.236987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.237000 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.339446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.339529 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.339570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.339590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.339602 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.406955 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.407032 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:57 crc kubenswrapper[4687]: E1203 17:39:57.407192 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:39:57 crc kubenswrapper[4687]: E1203 17:39:57.407406 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.429947 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.441609 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.441677 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.441699 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.441768 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.441790 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.446791 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.468999 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.488744 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.507625 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.528742 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.544012 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.545574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.545682 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.545708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.545737 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.545756 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.554761 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":
\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.570918 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.581795 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.594729 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.607272 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.621076 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.629797 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.633293 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.648759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.648797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.648807 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.648822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.648834 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.651330 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:
39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:57Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.751518 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.751571 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.751580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.751594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.751603 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.854810 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.854856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.854869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.854888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.854901 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.957080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.957157 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.957171 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.957199 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:57 crc kubenswrapper[4687]: I1203 17:39:57.957214 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:57Z","lastTransitionTime":"2025-12-03T17:39:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.059313 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.059537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.059608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.059678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.059737 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.161596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.161851 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.161861 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.161876 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.161885 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.266873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.266927 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.266938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.266956 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.266967 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.373097 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.373144 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.373158 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.373179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.373190 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.406694 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:39:58 crc kubenswrapper[4687]: E1203 17:39:58.406963 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.476523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.476588 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.476605 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.476630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.476646 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.578995 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.579240 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.579305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.579380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.579439 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.632632 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.685941 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.686135 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.686224 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.686294 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.686351 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.789387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.789429 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.789440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.789458 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.789470 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.864523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.864578 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.864591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.864610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.864622 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: E1203 17:39:58.881221 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:58Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.886439 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.886484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.886495 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.886513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:58 crc kubenswrapper[4687]: I1203 17:39:58.886523 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:58Z","lastTransitionTime":"2025-12-03T17:39:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:58 crc kubenswrapper[4687]: E1203 17:39:58.900546 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:58Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.138623 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.138685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.138702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.138727 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.138745 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: E1203 17:39:59.152844 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.156689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.156790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.156828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.156847 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.156856 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: E1203 17:39:59.170888 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.175173 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.175222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.175238 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.175262 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.175280 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: E1203 17:39:59.193070 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: E1203 17:39:59.193294 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.195434 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.195480 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.195570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.195596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.195610 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.298305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.298373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.298396 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.298423 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.298440 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.401562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.401609 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.401623 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.401643 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.401656 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.407803 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:39:59 crc kubenswrapper[4687]: E1203 17:39:59.407939 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.408027 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:39:59 crc kubenswrapper[4687]: E1203 17:39:59.408282 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.505636 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.505710 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.505735 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.505769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.505792 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.607840 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.607906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.607924 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.607950 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.607967 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.637795 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/0.log" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.641239 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c" exitCode=1 Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.641346 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c"} Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.641954 4687 scope.go:117] "RemoveContainer" containerID="cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.660556 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.676499 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.694126 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.708165 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.710039 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.710068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.710080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.710098 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.710116 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.726056 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc0
42179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.749607 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.764385 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 
17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.777861 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.791362 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.805797 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.812289 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.812596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.812611 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.812631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.812644 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.822235 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.842950 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.855979 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.885236 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"om/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 17:39:58.696638 6011 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:39:58.697416 6011 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:39:58.697455 6011 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:39:58.697499 6011 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:39:58.697604 6011 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:39:58.697543 6011 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 17:39:58.697636 6011 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 17:39:58.697664 6011 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:39:58.697706 6011 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:39:58.697710 6011 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:39:58.697734 6011 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 17:39:58.697750 6011 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 17:39:58.697790 6011 factory.go:656] Stopping watch factory\\\\nI1203 17:39:58.697815 6011 ovnkube.go:599] Stopped ovnkube\\\\nI1203 
17:39:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d71493
69ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.899016 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:39:59Z is after 2025-08-24T17:21:41Z" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.915398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.916037 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.916243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.916340 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:39:59 crc kubenswrapper[4687]: I1203 17:39:59.916419 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:39:59Z","lastTransitionTime":"2025-12-03T17:39:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.019057 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.019374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.019497 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.019608 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.019678 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.122827 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.123035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.123123 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.123223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.123297 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.226202 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.226256 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.226272 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.226331 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.226348 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.328213 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.328252 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.328260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.328276 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.328285 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.406445 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:00 crc kubenswrapper[4687]: E1203 17:40:00.406597 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.430370 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.430412 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.430424 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.430441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.430452 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.532239 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.532279 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.532291 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.532307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.532317 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.634627 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.634669 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.634680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.634696 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.634708 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.646940 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/0.log" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.651040 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.651241 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.668821 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.687097 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.703627 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.717365 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.736171 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.736888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.736948 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.736966 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.736988 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.737010 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.753516 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.767420 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 
17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.779858 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.796642 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.810352 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.830519 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.839011 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.839040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.839049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.839061 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.839071 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.862726 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.878334 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b
56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.898993 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"om/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 17:39:58.696638 6011 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:39:58.697416 6011 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 
17:39:58.697455 6011 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:39:58.697499 6011 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:39:58.697604 6011 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:39:58.697543 6011 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 17:39:58.697636 6011 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 17:39:58.697664 6011 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:39:58.697706 6011 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:39:58.697710 6011 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:39:58.697734 6011 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 17:39:58.697750 6011 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 17:39:58.697790 6011 factory.go:656] Stopping watch factory\\\\nI1203 17:39:58.697815 6011 ovnkube.go:599] Stopped ovnkube\\\\nI1203 
17:39:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.910985 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:40:00Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.941630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.941667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.941679 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.941695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:00 crc kubenswrapper[4687]: I1203 17:40:00.941705 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:00Z","lastTransitionTime":"2025-12-03T17:40:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.044463 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.044503 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.044513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.044530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.044540 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.140413 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.146357 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.146548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.146610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.146678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.146745 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.158413 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.169001 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.180487 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.180625 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.180674 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.180752 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:40:17.180721935 +0000 UTC m=+50.071417408 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.180760 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.180845 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:17.180829628 +0000 UTC m=+50.071525131 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.180892 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.181020 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-03 17:40:17.180989372 +0000 UTC m=+50.071684845 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.182201 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.200855 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"om/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 17:39:58.696638 6011 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:39:58.697416 6011 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 
17:39:58.697455 6011 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:39:58.697499 6011 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:39:58.697604 6011 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:39:58.697543 6011 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 17:39:58.697636 6011 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 17:39:58.697664 6011 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:39:58.697706 6011 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:39:58.697710 6011 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:39:58.697734 6011 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 17:39:58.697750 6011 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 17:39:58.697790 6011 factory.go:656] Stopping watch factory\\\\nI1203 17:39:58.697815 6011 ovnkube.go:599] Stopped ovnkube\\\\nI1203 
17:39:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.216314 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.229115 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.241451 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.249651 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.249793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.249975 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 
17:40:01.250166 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.250334 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.254661 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.267796 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.280043 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.281537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.281748 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.281785 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.281800 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.281861 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:17.281841494 +0000 UTC m=+50.172536927 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.282227 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.282307 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.282331 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.282454 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:17.28242186 +0000 UTC m=+50.173117333 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.282583 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.293580 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.307352 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.320433 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.332550 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.346078 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.352606 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.352652 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.352665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.352683 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.352697 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.407168 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.407260 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.407301 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.407443 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.455303 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.455591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.455697 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.455772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.455830 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.557987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.558041 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.558057 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.558080 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.558186 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.656647 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/1.log" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.658151 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/0.log" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.660345 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.660388 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.660399 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.660419 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.660434 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.662546 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b" exitCode=1 Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.662592 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.662643 4687 scope.go:117] "RemoveContainer" containerID="cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.663539 4687 scope.go:117] "RemoveContainer" containerID="9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b" Dec 03 17:40:01 crc kubenswrapper[4687]: E1203 17:40:01.663717 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.680784 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.697337 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.718106 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"om/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 17:39:58.696638 6011 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:39:58.697416 6011 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 
17:39:58.697455 6011 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:39:58.697499 6011 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:39:58.697604 6011 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:39:58.697543 6011 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 17:39:58.697636 6011 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 17:39:58.697664 6011 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:39:58.697706 6011 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:39:58.697710 6011 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:39:58.697734 6011 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 17:39:58.697750 6011 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 17:39:58.697790 6011 factory.go:656] Stopping watch factory\\\\nI1203 17:39:58.697815 6011 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:39:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"message\\\":\\\"7:40:00.490789 6137 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490838 6137 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490897 6137 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.491351 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:40:00.491364 6137 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI1203 17:40:00.491392 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:40:00.491461 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:40:00.491469 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:40:00.491514 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 17:40:00.491540 6137 factory.go:656] Stopping watch factory\\\\nI1203 17:40:00.491554 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 17:40:00.491562 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:40:00.491568 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 17:40:00.491573 6137 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.739321 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-
03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8
ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.752331 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.762868 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.762907 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.762946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 
17:40:01.762964 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.762977 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.766593 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.785324 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.798926 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.815608 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.838373 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.855233 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.866388 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.866441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.866455 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.866487 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.866501 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.870930 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.891614 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761
060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.913912 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.942085 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a56
46fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:01Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.969281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.969434 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.969501 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 
17:40:01.969596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:01 crc kubenswrapper[4687]: I1203 17:40:01.969799 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:01Z","lastTransitionTime":"2025-12-03T17:40:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.055311 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp"] Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.056017 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.059554 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.062183 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.073557 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.073614 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.073630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.073653 4687 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.073664 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:02Z","lastTransitionTime":"2025-12-03T17:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.078897 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.097432 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.113246 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.127660 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.143997 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.162240 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.176418 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.176547 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.176667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.176954 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.177202 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:02Z","lastTransitionTime":"2025-12-03T17:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.180537 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.192292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f507ce27-2982-4592-a5d5-f7b78e85363a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.192362 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/f507ce27-2982-4592-a5d5-f7b78e85363a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.192440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwc95\" (UniqueName: \"kubernetes.io/projected/f507ce27-2982-4592-a5d5-f7b78e85363a-kube-api-access-gwc95\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.192502 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f507ce27-2982-4592-a5d5-f7b78e85363a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.193525 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.211652 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.229264 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.242936 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.258339 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.279597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.279655 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.279665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.279692 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.279707 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:02Z","lastTransitionTime":"2025-12-03T17:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.279993 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.293982 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f507ce27-2982-4592-a5d5-f7b78e85363a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.294040 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f507ce27-2982-4592-a5d5-f7b78e85363a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.294113 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwc95\" (UniqueName: \"kubernetes.io/projected/f507ce27-2982-4592-a5d5-f7b78e85363a-kube-api-access-gwc95\") pod 
\"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.294183 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f507ce27-2982-4592-a5d5-f7b78e85363a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.294856 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f507ce27-2982-4592-a5d5-f7b78e85363a-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.295244 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f507ce27-2982-4592-a5d5-f7b78e85363a-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.298048 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b
56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.309953 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f507ce27-2982-4592-a5d5-f7b78e85363a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.313768 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gwc95\" (UniqueName: \"kubernetes.io/projected/f507ce27-2982-4592-a5d5-f7b78e85363a-kube-api-access-gwc95\") pod \"ovnkube-control-plane-749d76644c-nkgnp\" (UID: \"f507ce27-2982-4592-a5d5-f7b78e85363a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.322960 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"om/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 17:39:58.696638 6011 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:39:58.697416 6011 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 
17:39:58.697455 6011 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:39:58.697499 6011 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:39:58.697604 6011 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:39:58.697543 6011 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 17:39:58.697636 6011 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 17:39:58.697664 6011 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:39:58.697706 6011 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:39:58.697710 6011 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:39:58.697734 6011 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 17:39:58.697750 6011 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 17:39:58.697790 6011 factory.go:656] Stopping watch factory\\\\nI1203 17:39:58.697815 6011 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:39:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"message\\\":\\\"7:40:00.490789 6137 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490838 6137 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490897 6137 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.491351 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:40:00.491364 6137 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI1203 17:40:00.491392 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:40:00.491461 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:40:00.491469 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:40:00.491514 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 17:40:00.491540 6137 factory.go:656] Stopping watch factory\\\\nI1203 17:40:00.491554 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 17:40:00.491562 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:40:00.491568 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 17:40:00.491573 6137 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.336732 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.377143 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.383288 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.383353 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.383373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.383404 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.383424 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:02Z","lastTransitionTime":"2025-12-03T17:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: W1203 17:40:02.394286 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf507ce27_2982_4592_a5d5_f7b78e85363a.slice/crio-8994679e26041bdd5677b006156accee0fb7bf6dc1b9b2dc6e55d069bc0f73df WatchSource:0}: Error finding container 8994679e26041bdd5677b006156accee0fb7bf6dc1b9b2dc6e55d069bc0f73df: Status 404 returned error can't find the container with id 8994679e26041bdd5677b006156accee0fb7bf6dc1b9b2dc6e55d069bc0f73df Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.406726 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:02 crc kubenswrapper[4687]: E1203 17:40:02.406882 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.485977 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.486021 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.486035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.486053 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.486068 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:02Z","lastTransitionTime":"2025-12-03T17:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.588369 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.588425 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.588436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.588460 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.588471 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:02Z","lastTransitionTime":"2025-12-03T17:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.667720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" event={"ID":"f507ce27-2982-4592-a5d5-f7b78e85363a","Type":"ContainerStarted","Data":"42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.667796 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" event={"ID":"f507ce27-2982-4592-a5d5-f7b78e85363a","Type":"ContainerStarted","Data":"8994679e26041bdd5677b006156accee0fb7bf6dc1b9b2dc6e55d069bc0f73df"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.670269 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/1.log" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.690438 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.690479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.690491 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.690513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.690526 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:02Z","lastTransitionTime":"2025-12-03T17:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.794085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.794151 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.794165 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.794189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.794202 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:02Z","lastTransitionTime":"2025-12-03T17:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.810055 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-w8876"] Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.811108 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:02 crc kubenswrapper[4687]: E1203 17:40:02.811287 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.846951 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"om/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 17:39:58.696638 6011 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:39:58.697416 6011 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 
17:39:58.697455 6011 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:39:58.697499 6011 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:39:58.697604 6011 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:39:58.697543 6011 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 17:39:58.697636 6011 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 17:39:58.697664 6011 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:39:58.697706 6011 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:39:58.697710 6011 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:39:58.697734 6011 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 17:39:58.697750 6011 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 17:39:58.697790 6011 factory.go:656] Stopping watch factory\\\\nI1203 17:39:58.697815 6011 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:39:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"message\\\":\\\"7:40:00.490789 6137 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490838 6137 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490897 6137 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.491351 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:40:00.491364 6137 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI1203 17:40:00.491392 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:40:00.491461 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:40:00.491469 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:40:00.491514 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 17:40:00.491540 6137 factory.go:656] Stopping watch factory\\\\nI1203 17:40:00.491554 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 17:40:00.491562 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:40:00.491568 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 17:40:00.491573 6137 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.865665 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.879291 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.897817 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:02 crc 
kubenswrapper[4687]: I1203 17:40:02.897873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.897891 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.897922 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.897944 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:02Z","lastTransitionTime":"2025-12-03T17:40:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.900294 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.921693 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.936210 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.948816 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.968487 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.982603 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:02 crc kubenswrapper[4687]: I1203 17:40:02.994157 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:02Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.000605 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.000679 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.000701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.000728 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.000746 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.001181 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.001328 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppzdm\" (UniqueName: \"kubernetes.io/projected/2c067216-97d2-43a1-a8a6-5719153b3c61-kube-api-access-ppzdm\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.009254 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761
060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.023574 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.035426 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.048643 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.059659 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc 
kubenswrapper[4687]: I1203 17:40:03.080701 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.094431 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.102496 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppzdm\" (UniqueName: \"kubernetes.io/projected/2c067216-97d2-43a1-a8a6-5719153b3c61-kube-api-access-ppzdm\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.102582 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:03 crc kubenswrapper[4687]: E1203 17:40:03.102820 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:03 crc kubenswrapper[4687]: E1203 17:40:03.103091 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs podName:2c067216-97d2-43a1-a8a6-5719153b3c61 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:03.603061049 +0000 UTC m=+36.493756502 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs") pod "network-metrics-daemon-w8876" (UID: "2c067216-97d2-43a1-a8a6-5719153b3c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.104047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.104116 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.104150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.104173 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.104190 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.122469 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppzdm\" (UniqueName: \"kubernetes.io/projected/2c067216-97d2-43a1-a8a6-5719153b3c61-kube-api-access-ppzdm\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.207892 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.207942 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.207955 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.207982 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.208000 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.311203 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.311249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.311260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.311279 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.311291 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.406619 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.406687 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:03 crc kubenswrapper[4687]: E1203 17:40:03.406800 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:03 crc kubenswrapper[4687]: E1203 17:40:03.406886 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.413705 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.413781 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.413797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.413826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.413842 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.517592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.517631 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.517639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.517656 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.517669 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.609346 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:03 crc kubenswrapper[4687]: E1203 17:40:03.609490 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:03 crc kubenswrapper[4687]: E1203 17:40:03.609552 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs podName:2c067216-97d2-43a1-a8a6-5719153b3c61 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:04.609536667 +0000 UTC m=+37.500232110 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs") pod "network-metrics-daemon-w8876" (UID: "2c067216-97d2-43a1-a8a6-5719153b3c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.620111 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.620196 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.620213 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.620244 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.620259 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.679875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" event={"ID":"f507ce27-2982-4592-a5d5-f7b78e85363a","Type":"ContainerStarted","Data":"28971e75bfb0b561e1f29e108d749e260d10ba6fb8cff48a93068c6ecc7fc6e9"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.707829 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.721519 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.723158 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc 
kubenswrapper[4687]: I1203 17:40:03.723248 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.723267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.723292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.723332 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.734099 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.747205 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.764780 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.780587 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.795772 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.809855 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.826298 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.826348 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.826362 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.826381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.826393 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.827425 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.845925 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.862592 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.874047 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc 
kubenswrapper[4687]: I1203 17:40:03.895764 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.913623 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10b
a6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.929279 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.929335 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.929351 4687 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.929375 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.929392 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:03Z","lastTransitionTime":"2025-12-03T17:40:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.938931 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"om/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 17:39:58.696638 6011 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:39:58.697416 6011 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 
17:39:58.697455 6011 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:39:58.697499 6011 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:39:58.697604 6011 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:39:58.697543 6011 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 17:39:58.697636 6011 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 17:39:58.697664 6011 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:39:58.697706 6011 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:39:58.697710 6011 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:39:58.697734 6011 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 17:39:58.697750 6011 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 17:39:58.697790 6011 factory.go:656] Stopping watch factory\\\\nI1203 17:39:58.697815 6011 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:39:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"message\\\":\\\"7:40:00.490789 6137 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490838 6137 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490897 6137 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.491351 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:40:00.491364 6137 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI1203 17:40:00.491392 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:40:00.491461 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:40:00.491469 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:40:00.491514 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 17:40:00.491540 6137 factory.go:656] Stopping watch factory\\\\nI1203 17:40:00.491554 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 17:40:00.491562 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:40:00.491568 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 17:40:00.491573 6137 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.957461 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:03 crc kubenswrapper[4687]: I1203 17:40:03.976378 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:03Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.032160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc 
kubenswrapper[4687]: I1203 17:40:04.032202 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.032216 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.032233 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.032244 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.135492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.135526 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.135535 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.135554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.135564 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.237740 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.238194 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.238415 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.238596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.238735 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.341173 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.341212 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.341225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.341243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.341255 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.407387 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.407447 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:04 crc kubenswrapper[4687]: E1203 17:40:04.407576 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:04 crc kubenswrapper[4687]: E1203 17:40:04.407710 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.444515 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.444603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.444625 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.444657 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.444681 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.547596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.547902 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.547980 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.548057 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.548154 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.620027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:04 crc kubenswrapper[4687]: E1203 17:40:04.620340 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:04 crc kubenswrapper[4687]: E1203 17:40:04.620453 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs podName:2c067216-97d2-43a1-a8a6-5719153b3c61 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:06.620420565 +0000 UTC m=+39.511116038 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs") pod "network-metrics-daemon-w8876" (UID: "2c067216-97d2-43a1-a8a6-5719153b3c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.652320 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.652410 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.652436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.652470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.652494 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.755782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.755866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.755891 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.755925 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.755953 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.858299 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.858373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.858385 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.858407 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.858419 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.961440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.961490 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.961500 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.961518 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:04 crc kubenswrapper[4687]: I1203 17:40:04.961530 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:04Z","lastTransitionTime":"2025-12-03T17:40:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.064153 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.064228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.064252 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.064283 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.064306 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.166616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.166656 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.166685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.166703 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.166713 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.270285 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.270354 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.270373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.270400 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.270421 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.373378 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.373441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.373459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.373486 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.373507 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.406948 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.407099 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:05 crc kubenswrapper[4687]: E1203 17:40:05.407171 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:05 crc kubenswrapper[4687]: E1203 17:40:05.407415 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.479091 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.479168 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.479178 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.479195 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.479205 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.582744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.582789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.582807 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.582831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.582848 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.686414 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.686496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.686515 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.686533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.686544 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.789422 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.790402 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.790475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.790505 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.790519 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.893795 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.893889 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.893917 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.893949 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.893973 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.997862 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.997917 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.997928 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.997951 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:05 crc kubenswrapper[4687]: I1203 17:40:05.997964 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:05Z","lastTransitionTime":"2025-12-03T17:40:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.101966 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.102023 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.102036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.102058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.102071 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:06Z","lastTransitionTime":"2025-12-03T17:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.205916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.205985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.206004 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.206031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.206051 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:06Z","lastTransitionTime":"2025-12-03T17:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.309978 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.310033 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.310051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.310111 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.310168 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:06Z","lastTransitionTime":"2025-12-03T17:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.407059 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.407059 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:06 crc kubenswrapper[4687]: E1203 17:40:06.407366 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:06 crc kubenswrapper[4687]: E1203 17:40:06.407491 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.413774 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.413842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.413869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.413901 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.413925 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:06Z","lastTransitionTime":"2025-12-03T17:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.516449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.516548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.516562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.516580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.516592 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:06Z","lastTransitionTime":"2025-12-03T17:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.619757 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.619837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.619857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.619887 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.619909 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:06Z","lastTransitionTime":"2025-12-03T17:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.642661 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:06 crc kubenswrapper[4687]: E1203 17:40:06.642874 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:06 crc kubenswrapper[4687]: E1203 17:40:06.642996 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs podName:2c067216-97d2-43a1-a8a6-5719153b3c61 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:10.642963482 +0000 UTC m=+43.533658955 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs") pod "network-metrics-daemon-w8876" (UID: "2c067216-97d2-43a1-a8a6-5719153b3c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.723054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.723164 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.723189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.723216 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.723238 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:06Z","lastTransitionTime":"2025-12-03T17:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.825986 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.826066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.826088 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.826150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.826178 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:06Z","lastTransitionTime":"2025-12-03T17:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.929327 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.929383 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.929401 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.929423 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:06 crc kubenswrapper[4687]: I1203 17:40:06.929440 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:06Z","lastTransitionTime":"2025-12-03T17:40:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.032809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.032952 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.033017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.033051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.033076 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.136718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.136783 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.136797 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.136813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.136826 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.240683 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.240734 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.240745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.240767 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.240783 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.344545 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.344607 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.344627 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.344654 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.344674 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.406423 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.406477 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:07 crc kubenswrapper[4687]: E1203 17:40:07.406601 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:07 crc kubenswrapper[4687]: E1203 17:40:07.406813 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.422814 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.437471 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761
060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.447508 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.447562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.447573 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.447593 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.447607 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.453386 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.474953 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.496413 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.511418 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc 
kubenswrapper[4687]: I1203 17:40:07.543194 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.550119 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.550238 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.550260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.550296 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.550366 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.560290 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10ba6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.583699 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbb157b6eb58ac671dc3beb9af2951aeabe01637e0ed3b6d0e02bb553e45144c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:39:59Z\\\",\\\"message\\\":\\\"om/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1203 17:39:58.696638 6011 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:39:58.697416 6011 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 
17:39:58.697455 6011 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:39:58.697499 6011 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:39:58.697604 6011 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:39:58.697543 6011 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1203 17:39:58.697636 6011 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1203 17:39:58.697664 6011 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:39:58.697706 6011 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:39:58.697710 6011 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:39:58.697734 6011 handler.go:208] Removed *v1.Node event handler 7\\\\nI1203 17:39:58.697750 6011 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1203 17:39:58.697790 6011 factory.go:656] Stopping watch factory\\\\nI1203 17:39:58.697815 6011 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:39:5\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"message\\\":\\\"7:40:00.490789 6137 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490838 6137 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490897 6137 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.491351 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:40:00.491364 6137 handler.go:190] Sending 
*v1.Namespace event handler 5 for removal\\\\nI1203 17:40:00.491392 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:40:00.491461 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:40:00.491469 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:40:00.491514 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 17:40:00.491540 6137 factory.go:656] Stopping watch factory\\\\nI1203 17:40:00.491554 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 17:40:00.491562 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:40:00.491568 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 17:40:00.491573 6137 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var
/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.603472 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.617402 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.633456 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.646651 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.653616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc 
kubenswrapper[4687]: I1203 17:40:07.653672 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.653687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.653706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.653719 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.662588 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.679405 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.701987 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.722625 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:07Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.756232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.756271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.756281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 
17:40:07.756298 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.756311 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.858653 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.858718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.858730 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.858745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.858755 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.961946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.962308 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.962318 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.962333 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:07 crc kubenswrapper[4687]: I1203 17:40:07.962345 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:07Z","lastTransitionTime":"2025-12-03T17:40:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.065167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.065234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.065246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.065290 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.065304 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.168104 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.168189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.168205 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.168228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.168246 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.271225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.271335 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.271355 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.271385 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.271402 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.374802 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.374866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.374882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.374908 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.374963 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.407337 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.407543 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:08 crc kubenswrapper[4687]: E1203 17:40:08.407713 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:08 crc kubenswrapper[4687]: E1203 17:40:08.407886 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.478543 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.478612 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.478676 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.478704 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.478763 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.582167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.582565 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.582667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.582821 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.582924 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.686759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.686832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.686856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.686888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.686909 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.789636 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.789710 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.789728 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.789754 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.789775 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.893193 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.893264 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.893287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.893318 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.893342 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.996842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.996958 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.996986 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.997023 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:08 crc kubenswrapper[4687]: I1203 17:40:08.997049 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:08Z","lastTransitionTime":"2025-12-03T17:40:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.100016 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.100082 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.100107 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.100164 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.100183 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.202644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.202706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.202730 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.202761 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.202783 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.305828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.305889 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.305906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.305932 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.305951 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.407151 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:09 crc kubenswrapper[4687]: E1203 17:40:09.407382 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.407158 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:09 crc kubenswrapper[4687]: E1203 17:40:09.407962 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.409292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.409494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.409523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.409544 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.409561 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.415470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.415504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.415515 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.415533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.415545 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: E1203 17:40:09.429713 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:09Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.439184 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.439232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.439246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.439270 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.439288 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: E1203 17:40:09.455860 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:09Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.462303 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.462364 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.462382 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.462407 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.462425 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: E1203 17:40:09.505767 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:09Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.515190 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.515263 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.515287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.515319 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.515343 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: E1203 17:40:09.539369 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:09Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.544245 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.544437 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.544537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.544618 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.544683 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: E1203 17:40:09.560161 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:09Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:09 crc kubenswrapper[4687]: E1203 17:40:09.560314 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.562220 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.562251 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.562263 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.562280 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.562292 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.664950 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.664998 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.665017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.665036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.665048 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.767682 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.767729 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.767738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.767755 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.767769 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.871492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.871554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.871567 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.871589 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.871601 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.974539 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.974578 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.974591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.974607 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:09 crc kubenswrapper[4687]: I1203 17:40:09.974617 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:09Z","lastTransitionTime":"2025-12-03T17:40:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.077222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.077284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.077298 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.077339 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.077354 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:10Z","lastTransitionTime":"2025-12-03T17:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.180583 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.180635 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.180647 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.180664 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.180675 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:10Z","lastTransitionTime":"2025-12-03T17:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.283271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.283363 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.283379 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.283395 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.283404 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:10Z","lastTransitionTime":"2025-12-03T17:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.386412 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.386442 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.386450 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.386464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.386473 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:10Z","lastTransitionTime":"2025-12-03T17:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.406871 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:10 crc kubenswrapper[4687]: E1203 17:40:10.406986 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.406871 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:10 crc kubenswrapper[4687]: E1203 17:40:10.407096 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.490073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.490144 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.490155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.490175 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.490186 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:10Z","lastTransitionTime":"2025-12-03T17:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.592577 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.592644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.592665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.592688 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.592706 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:10Z","lastTransitionTime":"2025-12-03T17:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.690500 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:10 crc kubenswrapper[4687]: E1203 17:40:10.690637 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:10 crc kubenswrapper[4687]: E1203 17:40:10.690698 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs podName:2c067216-97d2-43a1-a8a6-5719153b3c61 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:18.690680039 +0000 UTC m=+51.581375482 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs") pod "network-metrics-daemon-w8876" (UID: "2c067216-97d2-43a1-a8a6-5719153b3c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.697178 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.697229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.697242 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.697264 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.697289 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:10Z","lastTransitionTime":"2025-12-03T17:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.799290 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.799350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.799362 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.799389 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.799403 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:10Z","lastTransitionTime":"2025-12-03T17:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.902146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.902219 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.902232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.902250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:10 crc kubenswrapper[4687]: I1203 17:40:10.902264 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:10Z","lastTransitionTime":"2025-12-03T17:40:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.005234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.005283 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.005298 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.005318 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.005337 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.108230 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.108282 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.108293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.108311 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.108325 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.211629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.211693 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.211706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.211722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.211734 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.318044 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.318084 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.318095 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.318110 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.318140 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.406616 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.406693 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:11 crc kubenswrapper[4687]: E1203 17:40:11.406771 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:11 crc kubenswrapper[4687]: E1203 17:40:11.406873 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.419809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.419866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.419885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.419909 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.419928 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.523482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.523875 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.524022 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.524195 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.524337 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.627792 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.627847 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.627870 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.627896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.627914 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.731230 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.731618 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.731787 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.731959 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.732185 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.834867 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.834951 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.834975 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.835005 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.835028 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.937433 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.937673 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.937779 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.937877 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:11 crc kubenswrapper[4687]: I1203 17:40:11.937951 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:11Z","lastTransitionTime":"2025-12-03T17:40:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.041750 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.041788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.041835 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.041850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.041860 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.144870 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.144945 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.144969 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.144997 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.145019 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.248089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.248199 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.248222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.248248 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.248267 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.351185 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.351235 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.351246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.351455 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.351471 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.407378 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:12 crc kubenswrapper[4687]: E1203 17:40:12.407805 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.407428 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:12 crc kubenswrapper[4687]: E1203 17:40:12.408076 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.454003 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.454040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.454051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.454086 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.454097 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.557069 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.557107 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.557137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.557154 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.557165 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.659891 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.660161 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.660193 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.660225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.660246 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.762826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.763077 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.763349 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.763459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.763537 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.867899 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.868566 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.868786 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.869026 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.869297 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.972158 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.972448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.972508 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.972575 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:12 crc kubenswrapper[4687]: I1203 17:40:12.972645 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:12Z","lastTransitionTime":"2025-12-03T17:40:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.076160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.077049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.077249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.077404 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.077554 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:13Z","lastTransitionTime":"2025-12-03T17:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.180232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.180275 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.180293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.180316 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.180332 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:13Z","lastTransitionTime":"2025-12-03T17:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.284019 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.284109 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.284164 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.284192 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.284210 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:13Z","lastTransitionTime":"2025-12-03T17:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.387052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.387103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.387150 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.387172 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.387188 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:13Z","lastTransitionTime":"2025-12-03T17:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.406957 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:13 crc kubenswrapper[4687]: E1203 17:40:13.407171 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.407410 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:13 crc kubenswrapper[4687]: E1203 17:40:13.407488 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.489935 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.489982 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.489995 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.490014 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.490025 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:13Z","lastTransitionTime":"2025-12-03T17:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.593694 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.593755 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.593778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.593809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.593834 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:13Z","lastTransitionTime":"2025-12-03T17:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.696939 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.696996 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.697019 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.697045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.697063 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:13Z","lastTransitionTime":"2025-12-03T17:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.799361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.799393 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.799402 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.799416 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.799425 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:13Z","lastTransitionTime":"2025-12-03T17:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.902405 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.902468 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.902486 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.902510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:13 crc kubenswrapper[4687]: I1203 17:40:13.902527 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:13Z","lastTransitionTime":"2025-12-03T17:40:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.004701 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.004743 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.004753 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.004768 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.004779 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.107457 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.107529 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.107552 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.107583 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.107606 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.210468 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.210560 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.210579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.210612 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.210634 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.314645 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.314795 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.314817 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.314848 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.314867 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.407084 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.407174 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:14 crc kubenswrapper[4687]: E1203 17:40:14.407262 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:14 crc kubenswrapper[4687]: E1203 17:40:14.407458 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.417642 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.417708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.417730 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.417761 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.417781 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.521851 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.521985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.522004 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.522036 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.522084 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.625285 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.625362 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.625378 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.625403 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.625420 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.728921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.728979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.728998 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.729024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.729044 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.831455 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.831512 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.831524 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.831543 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.831555 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.934147 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.934189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.934199 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.934220 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:14 crc kubenswrapper[4687]: I1203 17:40:14.934234 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:14Z","lastTransitionTime":"2025-12-03T17:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.037869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.037936 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.037957 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.037999 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.038025 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.141178 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.141225 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.141236 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.141253 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.141265 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.244962 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.245040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.245071 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.245102 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.245171 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.347427 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.347476 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.347490 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.347511 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.347524 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.406613 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:15 crc kubenswrapper[4687]: E1203 17:40:15.406809 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.406838 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:15 crc kubenswrapper[4687]: E1203 17:40:15.406958 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.407516 4687 scope.go:117] "RemoveContainer" containerID="9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.431789 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.451040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.451118 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.451177 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.451208 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.451232 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.457694 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.478339 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.494162 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.511608 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.536701 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.554644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.554695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.554713 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.554737 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.554755 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.566483 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10ba6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.580867 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc 
kubenswrapper[4687]: I1203 17:40:15.598452 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.613727 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 
17:40:15.639383 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"message\\\":\\\"7:40:00.490789 6137 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490838 6137 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490897 6137 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.491351 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:40:00.491364 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:40:00.491392 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:40:00.491461 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:40:00.491469 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:40:00.491514 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 17:40:00.491540 6137 factory.go:656] Stopping watch factory\\\\nI1203 17:40:00.491554 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 17:40:00.491562 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:40:00.491568 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 17:40:00.491573 6137 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12
291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.657062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.657106 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.657138 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.657156 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.657169 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.662067 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.686693 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.707005 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.727508 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.746652 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.760619 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc 
kubenswrapper[4687]: I1203 17:40:15.760702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.760725 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.760755 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.760777 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.761390 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:15Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.864112 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.864210 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.864326 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.864498 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.864528 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.967614 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.967658 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.967675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.967698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:15 crc kubenswrapper[4687]: I1203 17:40:15.967716 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:15Z","lastTransitionTime":"2025-12-03T17:40:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.070017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.070056 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.070081 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.070095 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.070103 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:16Z","lastTransitionTime":"2025-12-03T17:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.173529 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.173583 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.173594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.173612 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.173627 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:16Z","lastTransitionTime":"2025-12-03T17:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.406820 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.406846 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:16 crc kubenswrapper[4687]: E1203 17:40:16.407245 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:16 crc kubenswrapper[4687]: E1203 17:40:16.407400 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.416325 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.416358 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.416367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.416381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.416392 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:16Z","lastTransitionTime":"2025-12-03T17:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.518930 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.518957 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.518965 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.518994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.519002 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:16Z","lastTransitionTime":"2025-12-03T17:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.622487 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.622562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.622575 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.622597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.622610 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:16Z","lastTransitionTime":"2025-12-03T17:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.725869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.725950 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.725972 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.726004 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.726022 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:16Z","lastTransitionTime":"2025-12-03T17:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.733542 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/1.log" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.737827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.738009 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.762741 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4c
f86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.783834 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.803691 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.815932 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.828874 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.828918 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.828935 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.828957 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.828974 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:16Z","lastTransitionTime":"2025-12-03T17:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.832623 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.856463 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.868557 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10b
a6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.880190 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc 
kubenswrapper[4687]: I1203 17:40:16.894927 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.917867 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 
17:40:16.930883 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.930916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.930924 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.930937 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.930947 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:16Z","lastTransitionTime":"2025-12-03T17:40:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.938770 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"message\\\":\\\"7:40:00.490789 6137 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490838 6137 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490897 6137 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.491351 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:40:00.491364 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:40:00.491392 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:40:00.491461 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:40:00.491469 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:40:00.491514 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 17:40:00.491540 6137 factory.go:656] Stopping watch factory\\\\nI1203 17:40:00.491554 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 17:40:00.491562 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:40:00.491568 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 17:40:00.491573 6137 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.951236 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.968760 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:16 crc kubenswrapper[4687]: I1203 17:40:16.984272 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.001077 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:16Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.019341 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.032717 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc 
kubenswrapper[4687]: I1203 17:40:17.032768 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.032787 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.032812 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.032832 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.033203 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.135562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.135680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.135691 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.135706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.135717 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.238778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.238841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.238858 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.238883 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.238900 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.264710 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.264899 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.264971 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:40:49.264941195 +0000 UTC m=+82.155636658 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.265027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.265044 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.265171 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:49.2651026 +0000 UTC m=+82.155798073 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.265235 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.265298 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:49.265281495 +0000 UTC m=+82.155976958 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.267760 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.340600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.340638 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.340652 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.340672 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.340687 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.366708 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.366821 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.367042 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.367068 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.367084 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.367107 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.367148 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.367171 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.367220 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:49.367199405 +0000 UTC m=+82.257894918 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.367270 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:49.367240207 +0000 UTC m=+82.257935710 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.407372 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.407382 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.407524 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.407633 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.421019 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.436553 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.443045 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.443101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.443113 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.443153 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.443166 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.455262 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.476213 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.494018 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.508549 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.524724 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.545884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.545914 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.545921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.545934 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.545944 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.547477 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.561983 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.573221 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.588581 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.613764 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.631233 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10ba6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.647628 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc 
kubenswrapper[4687]: I1203 17:40:17.649603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.649655 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.649671 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.649695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.649713 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.664506 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.681829 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.709720 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"message\\\":\\\"7:40:00.490789 6137 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490838 6137 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490897 6137 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.491351 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:40:00.491364 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:40:00.491392 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:40:00.491461 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:40:00.491469 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:40:00.491514 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 17:40:00.491540 6137 factory.go:656] Stopping watch factory\\\\nI1203 17:40:00.491554 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 17:40:00.491562 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:40:00.491568 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 17:40:00.491573 6137 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.743922 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/2.log" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.744990 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/1.log" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.748310 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00" exitCode=1 Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.748391 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.748464 4687 scope.go:117] "RemoveContainer" containerID="9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.749867 4687 scope.go:117] "RemoveContainer" containerID="4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00" Dec 03 17:40:17 crc kubenswrapper[4687]: E1203 17:40:17.750101 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.751408 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.751456 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.751473 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.751496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.751512 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.769287 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.783025 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.806206 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9d64d93ecfd7312456b316786c1f20be270423235f672af96b755c2733285c9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"message\\\":\\\"7:40:00.490789 6137 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490838 6137 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.490897 6137 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI1203 17:40:00.491351 6137 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1203 17:40:00.491364 6137 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1203 17:40:00.491392 6137 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1203 17:40:00.491461 6137 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1203 17:40:00.491469 6137 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1203 17:40:00.491514 6137 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1203 17:40:00.491540 6137 factory.go:656] Stopping watch factory\\\\nI1203 17:40:00.491554 6137 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1203 17:40:00.491562 6137 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1203 17:40:00.491568 6137 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1203 17:40:00.491573 6137 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:17Z\\\",\\\"message\\\":\\\"1.Node event handler 2 for removal\\\\nI1203 17:40:17.355060 6351 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355148 6351 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355186 6351 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1203 17:40:17.355198 6351 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1203 17:40:17.355212 6351 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355234 6351 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 338.559µs)\\\\nI1203 17:40:17.355271 6351 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:40:17.355373 6351 factory.go:656] Stopping watch factory\\\\nI1203 17:40:17.355416 6351 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:40:17.355482 6351 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 17:40:17.355608 6351 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\
\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.819874 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.833584 4687 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.854400 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.854474 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.854486 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.854507 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.854521 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.856293 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc0
42179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.884041 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.903711 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.929034 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.956203 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761
060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.957983 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.958032 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.958048 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.958070 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.958087 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:17Z","lastTransitionTime":"2025-12-03T17:40:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.975803 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:17 crc kubenswrapper[4687]: I1203 17:40:17.995446 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:17Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.012292 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.029500 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.054056 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.061021 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.061090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.061169 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.061222 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 
17:40:18.061246 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.069832 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f
12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10ba6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.085497 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc 
kubenswrapper[4687]: I1203 17:40:18.165044 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.165179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.165224 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.165265 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.165291 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.269807 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.269904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.269924 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.269952 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.269970 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.373801 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.373872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.373897 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.373921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.373938 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.406739 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.406815 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:18 crc kubenswrapper[4687]: E1203 17:40:18.406956 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:18 crc kubenswrapper[4687]: E1203 17:40:18.407349 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.477833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.477903 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.477943 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.477969 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.477989 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.580964 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.581042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.581052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.581067 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.581078 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.684094 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.684243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.684268 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.684294 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.684313 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.753608 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/2.log" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.759075 4687 scope.go:117] "RemoveContainer" containerID="4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00" Dec 03 17:40:18 crc kubenswrapper[4687]: E1203 17:40:18.759424 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.779096 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.784067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:18 crc kubenswrapper[4687]: E1203 17:40:18.784296 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:18 crc kubenswrapper[4687]: E1203 17:40:18.784442 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs podName:2c067216-97d2-43a1-a8a6-5719153b3c61 nodeName:}" failed. No retries permitted until 2025-12-03 17:40:34.784411508 +0000 UTC m=+67.675106971 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs") pod "network-metrics-daemon-w8876" (UID: "2c067216-97d2-43a1-a8a6-5719153b3c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.786449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.786562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.786651 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.786738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.786774 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.798182 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.818312 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.832890 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.855901 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.883970 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.889833 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.889885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.889901 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.889923 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.889939 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.899656 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10ba6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.913585 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc 
kubenswrapper[4687]: I1203 17:40:18.927279 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.942652 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 
17:40:18.963601 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:17Z\\\",\\\"message\\\":\\\"1.Node event handler 2 for removal\\\\nI1203 17:40:17.355060 6351 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355148 6351 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355186 6351 
ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1203 17:40:17.355198 6351 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1203 17:40:17.355212 6351 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355234 6351 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 338.559µs)\\\\nI1203 17:40:17.355271 6351 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:40:17.355373 6351 factory.go:656] Stopping watch factory\\\\nI1203 17:40:17.355416 6351 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:40:17.355482 6351 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 17:40:17.355608 6351 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12
291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.978905 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.992459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.992505 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.992517 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.992549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.992562 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:18Z","lastTransitionTime":"2025-12-03T17:40:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:18 crc kubenswrapper[4687]: I1203 17:40:18.995434 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountP
ath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:18Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.008673 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:19Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.021166 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:19Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.037740 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:19Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.048460 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:19Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.094689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.094732 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.094748 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.094762 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.094772 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.197791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.197873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.197889 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.198361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.198411 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.301929 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.301990 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.302031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.302064 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.302086 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.404569 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.404615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.404630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.404649 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.404660 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.407458 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.407492 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:19 crc kubenswrapper[4687]: E1203 17:40:19.407644 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:19 crc kubenswrapper[4687]: E1203 17:40:19.407793 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.507536 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.507591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.507605 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.507625 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.507639 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.581155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.581228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.581246 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.581269 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.581285 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: E1203 17:40:19.600246 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:19Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.604414 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.604444 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.604453 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.604468 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.604480 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: E1203 17:40:19.618690 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:19Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.622051 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.622101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.622163 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.622187 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.622206 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: E1203 17:40:19.634719 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:19Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.638827 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.638881 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.638896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.638919 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.638934 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: E1203 17:40:19.655062 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:19Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.659314 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.659360 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.659410 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.659433 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.659479 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: E1203 17:40:19.673158 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:19Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:19 crc kubenswrapper[4687]: E1203 17:40:19.673411 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.675086 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.675201 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.675228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.675263 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.675286 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.778306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.778346 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.778357 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.778374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.778387 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.882107 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.882205 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.882250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.882280 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.882298 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.985772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.985856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.985880 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.985921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:19 crc kubenswrapper[4687]: I1203 17:40:19.985950 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:19Z","lastTransitionTime":"2025-12-03T17:40:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.090314 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.090381 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.090404 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.090436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.090460 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:20Z","lastTransitionTime":"2025-12-03T17:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.194894 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.194967 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.194990 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.195016 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.195054 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:20Z","lastTransitionTime":"2025-12-03T17:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.239532 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.255352 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.262295 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd
808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.277777 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10b
a6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.292325 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc 
kubenswrapper[4687]: I1203 17:40:20.298596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.298630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.298647 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.298663 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.298672 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:20Z","lastTransitionTime":"2025-12-03T17:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.304397 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.317923 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.351478 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:17Z\\\",\\\"message\\\":\\\"1.Node event handler 2 for removal\\\\nI1203 17:40:17.355060 6351 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355148 6351 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355186 6351 
ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1203 17:40:17.355198 6351 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1203 17:40:17.355212 6351 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355234 6351 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 338.559µs)\\\\nI1203 17:40:17.355271 6351 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:40:17.355373 6351 factory.go:656] Stopping watch factory\\\\nI1203 17:40:17.355416 6351 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:40:17.355482 6351 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 17:40:17.355608 6351 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12
291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.369877 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.385989 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.401487 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.401522 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.401660 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.401673 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.401687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.401696 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:20Z","lastTransitionTime":"2025-12-03T17:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.406609 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.406620 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:20 crc kubenswrapper[4687]: E1203 17:40:20.406704 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:20 crc kubenswrapper[4687]: E1203 17:40:20.406806 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.415014 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.425612 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.437812 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.451714 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.470238 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.486278 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.510013 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.510601 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.510646 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.510665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.510691 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.510709 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:20Z","lastTransitionTime":"2025-12-03T17:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.533434 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:20Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.613509 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.613594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.613621 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.613653 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.613676 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:20Z","lastTransitionTime":"2025-12-03T17:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.717348 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.717417 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.717437 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.717465 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.717483 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:20Z","lastTransitionTime":"2025-12-03T17:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.820431 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.820492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.820511 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.820538 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.820557 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:20Z","lastTransitionTime":"2025-12-03T17:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.924093 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.924188 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.924210 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.924236 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:20 crc kubenswrapper[4687]: I1203 17:40:20.924253 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:20Z","lastTransitionTime":"2025-12-03T17:40:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.027645 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.027733 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.027757 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.027788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.027812 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.130551 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.130606 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.130618 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.130637 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.130648 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.233819 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.233884 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.233901 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.233927 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.233947 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.336200 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.336242 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.336253 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.336268 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.336279 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.406486 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.406538 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:21 crc kubenswrapper[4687]: E1203 17:40:21.406643 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:21 crc kubenswrapper[4687]: E1203 17:40:21.406768 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.439597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.439666 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.439683 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.439707 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.439725 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.542724 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.542791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.542812 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.542842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.542866 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.646347 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.646415 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.646428 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.646452 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.646471 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.749770 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.749835 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.749855 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.749878 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.749894 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.851611 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.851664 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.851678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.851695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.851708 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.954793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.954857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.954873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.954898 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:21 crc kubenswrapper[4687]: I1203 17:40:21.954915 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:21Z","lastTransitionTime":"2025-12-03T17:40:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.057978 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.058038 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.058054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.058082 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.058098 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.161734 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.161814 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.161841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.161868 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.161889 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.264234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.264292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.264307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.264323 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.264334 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.366624 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.366675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.366696 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.366718 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.366729 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.406553 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.406553 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:22 crc kubenswrapper[4687]: E1203 17:40:22.406746 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:22 crc kubenswrapper[4687]: E1203 17:40:22.406860 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.470189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.470244 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.470255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.470277 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.470290 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.574371 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.574443 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.574461 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.574488 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.574508 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.677602 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.677702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.677727 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.677789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.677822 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.781151 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.781228 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.781251 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.781281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.781307 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.885216 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.885284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.885307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.885337 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.885362 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.988762 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.988809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.988820 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.988836 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:22 crc kubenswrapper[4687]: I1203 17:40:22.988846 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:22Z","lastTransitionTime":"2025-12-03T17:40:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.091047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.091202 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.091413 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.091449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.091482 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:23Z","lastTransitionTime":"2025-12-03T17:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.194540 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.194615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.194639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.194668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.194689 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:23Z","lastTransitionTime":"2025-12-03T17:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.296991 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.297052 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.297072 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.297096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.297114 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:23Z","lastTransitionTime":"2025-12-03T17:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.398985 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.399049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.399064 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.399084 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.399097 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:23Z","lastTransitionTime":"2025-12-03T17:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.407339 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:23 crc kubenswrapper[4687]: E1203 17:40:23.407470 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.407554 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:23 crc kubenswrapper[4687]: E1203 17:40:23.407786 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.501951 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.502013 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.502030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.502054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.502072 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:23Z","lastTransitionTime":"2025-12-03T17:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.607207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.607258 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.607273 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.607296 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.607311 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:23Z","lastTransitionTime":"2025-12-03T17:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.710666 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.710750 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.710763 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.710789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.710802 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:23Z","lastTransitionTime":"2025-12-03T17:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.813649 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.813694 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.813707 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.813727 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.813737 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:23Z","lastTransitionTime":"2025-12-03T17:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.916627 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.916689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.916711 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.916745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:23 crc kubenswrapper[4687]: I1203 17:40:23.916766 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:23Z","lastTransitionTime":"2025-12-03T17:40:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.020426 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.020486 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.020506 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.020535 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.020555 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.123520 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.123896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.124032 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.124215 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.124341 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.227705 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.227771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.227793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.227822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.227846 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.331179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.331262 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.331286 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.331319 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.331341 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.407093 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.407212 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:24 crc kubenswrapper[4687]: E1203 17:40:24.407285 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:24 crc kubenswrapper[4687]: E1203 17:40:24.407384 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.434255 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.434321 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.434343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.434374 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.434395 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.538103 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.538207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.538224 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.538251 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.538268 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.641791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.641850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.641866 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.641890 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.641908 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.745256 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.745316 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.745331 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.745354 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.745373 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.848581 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.848660 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.848685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.848715 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.848732 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.951664 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.951791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.951813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.951837 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:24 crc kubenswrapper[4687]: I1203 17:40:24.951854 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:24Z","lastTransitionTime":"2025-12-03T17:40:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.055229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.055300 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.055318 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.055344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.055361 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.158550 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.158622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.158646 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.158675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.158695 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.262841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.262913 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.262946 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.262977 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.262999 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.365413 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.365482 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.365502 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.365535 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.365555 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.407453 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:25 crc kubenswrapper[4687]: E1203 17:40:25.407645 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.407739 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:25 crc kubenswrapper[4687]: E1203 17:40:25.407976 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.468594 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.468722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.468795 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.468830 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.468896 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.571960 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.572050 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.572174 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.572217 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.572259 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.677076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.677179 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.677198 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.677227 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.677247 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.779948 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.780010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.780021 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.780042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.780053 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.882478 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.882552 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.882590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.882624 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.882648 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.984961 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.984994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.985003 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.985015 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:25 crc kubenswrapper[4687]: I1203 17:40:25.985025 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:25Z","lastTransitionTime":"2025-12-03T17:40:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.087558 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.087611 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.087619 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.087637 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.087648 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:26Z","lastTransitionTime":"2025-12-03T17:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.190953 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.191039 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.191065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.191095 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.191155 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:26Z","lastTransitionTime":"2025-12-03T17:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.294623 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.294683 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.294699 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.294722 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.294739 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:26Z","lastTransitionTime":"2025-12-03T17:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.397066 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.397116 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.397148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.397166 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.397178 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:26Z","lastTransitionTime":"2025-12-03T17:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.406494 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.406518 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:26 crc kubenswrapper[4687]: E1203 17:40:26.406621 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:26 crc kubenswrapper[4687]: E1203 17:40:26.406825 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.500394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.500472 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.500491 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.500517 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.500535 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:26Z","lastTransitionTime":"2025-12-03T17:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.604575 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.604646 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.604664 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.604689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.604706 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:26Z","lastTransitionTime":"2025-12-03T17:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.707729 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.707807 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.707830 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.707860 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.707878 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:26Z","lastTransitionTime":"2025-12-03T17:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.810867 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.810943 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.810968 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.810997 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.811020 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:26Z","lastTransitionTime":"2025-12-03T17:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.913761 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.913846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.913872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.913905 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:26 crc kubenswrapper[4687]: I1203 17:40:26.913929 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:26Z","lastTransitionTime":"2025-12-03T17:40:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.016994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.017060 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.017079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.017105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.017170 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.119738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.119770 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.119779 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.119794 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.119802 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.223436 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.223470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.223478 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.223491 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.223500 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.326229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.326285 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.326327 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.326352 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.326371 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.407212 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.407630 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:27 crc kubenswrapper[4687]: E1203 17:40:27.407778 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:27 crc kubenswrapper[4687]: E1203 17:40:27.408077 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.423781 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.428992 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.429028 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.429042 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.429059 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.429073 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.436480 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"nam
e\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.457333 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:17Z\\\",\\\"message\\\":\\\"1.Node event handler 2 for removal\\\\nI1203 17:40:17.355060 6351 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355148 6351 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355186 6351 
ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1203 17:40:17.355198 6351 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1203 17:40:17.355212 6351 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355234 6351 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 338.559µs)\\\\nI1203 17:40:17.355271 6351 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:40:17.355373 6351 factory.go:656] Stopping watch factory\\\\nI1203 17:40:17.355416 6351 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:40:17.355482 6351 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 17:40:17.355608 6351 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12
291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.474081 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.492142 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.515567 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.532156 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.532189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.532198 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.532213 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.532223 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.533714 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.550202 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.566521 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.581166 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.594177 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.609416 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.620727 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.634805 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.635745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.635798 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.635814 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.635838 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.635854 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.647958 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef0b114-8148-4072-a2df-80a1497e344d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fa4cc1bb33184c2f361e06794c4e72232384768d410edad74a356209aea66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4476b45459ca2a59bddf09fe3cd6919bb80f10f388c32ffd12129506f24fba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae0950bd389d58f692936a9eb8c880a239a7eff1d205c71318f07df98e5f8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.671315 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.688997 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10ba6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.705411 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:27Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:27 crc 
kubenswrapper[4687]: I1203 17:40:27.739232 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.739335 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.739354 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.739380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.739402 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.843678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.843808 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.844046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.844515 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.844585 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.948074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.948558 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.948625 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.948719 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:27 crc kubenswrapper[4687]: I1203 17:40:27.948743 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:27Z","lastTransitionTime":"2025-12-03T17:40:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.052472 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.052552 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.052569 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.052592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.052612 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.156757 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.156823 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.156841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.156868 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.156888 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.259800 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.259852 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.259869 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.259896 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.259914 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.364047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.364164 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.364186 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.364220 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.364241 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.406751 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.406781 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:28 crc kubenswrapper[4687]: E1203 17:40:28.406943 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:28 crc kubenswrapper[4687]: E1203 17:40:28.407115 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.467496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.467550 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.467565 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.467588 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.467603 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.570686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.570771 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.570788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.570809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.570868 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.673938 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.673979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.673988 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.674002 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.674011 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.780317 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.780367 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.780380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.780401 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.780416 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.883542 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.883610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.883626 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.883654 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.883671 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.987277 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.987343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.987365 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.987396 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:28 crc kubenswrapper[4687]: I1203 17:40:28.987419 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:28Z","lastTransitionTime":"2025-12-03T17:40:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.090208 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.090257 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.090268 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.090286 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.090298 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.192990 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.193049 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.193067 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.193091 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.193110 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.296247 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.296317 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.296335 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.296362 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.296381 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.399747 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.399844 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.399868 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.399900 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.399922 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.407279 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.407308 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:29 crc kubenswrapper[4687]: E1203 17:40:29.407435 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:29 crc kubenswrapper[4687]: E1203 17:40:29.407498 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.503040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.503090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.503101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.503135 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.503148 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.606479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.606520 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.606529 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.606545 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.606556 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.708778 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.708885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.708895 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.708931 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.708941 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.747030 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.747061 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.747079 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.747097 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.747107 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: E1203 17:40:29.759497 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:29Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.764444 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.764470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.764477 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.764491 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.764516 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: E1203 17:40:29.777337 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:29Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.781856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.781891 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.781900 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.781914 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.781923 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: E1203 17:40:29.794329 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:29Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.798695 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.798804 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.798849 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.798868 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.798881 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: E1203 17:40:29.813186 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 17:40:29.794329 entry above] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:29Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.817218 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.817260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.817271 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.817288 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.817299 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: E1203 17:40:29.829153 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:29Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:29 crc kubenswrapper[4687]: E1203 17:40:29.829313 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.831093 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.831136 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.831145 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.831157 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.831167 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.933885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.933917 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.933925 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.933937 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:29 crc kubenswrapper[4687]: I1203 17:40:29.933946 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:29Z","lastTransitionTime":"2025-12-03T17:40:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.036849 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.036906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.036923 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.036947 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.036964 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.139549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.139602 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.139621 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.139642 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.139659 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.242440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.242490 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.242501 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.242517 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.242527 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.344961 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.345017 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.345025 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.345040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.345049 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.406639 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.406732 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:30 crc kubenswrapper[4687]: E1203 17:40:30.406792 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:30 crc kubenswrapper[4687]: E1203 17:40:30.406916 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.450244 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.450284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.450292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.450306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.450316 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.552967 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.553016 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.553026 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.553040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.553050 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.656598 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.656648 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.656659 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.656675 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.656686 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.760068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.760114 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.760142 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.760156 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.760164 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.862969 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.863023 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.863039 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.863062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.863079 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.966761 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.966835 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.966853 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.966877 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:30 crc kubenswrapper[4687]: I1203 17:40:30.966894 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:30Z","lastTransitionTime":"2025-12-03T17:40:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.069074 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.069138 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.069154 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.069173 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.069186 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.172161 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.172207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.172220 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.172240 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.172253 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.275624 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.275689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.275703 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.275730 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.275747 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.378438 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.378473 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.378484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.378499 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.378530 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.407268 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.407330 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:31 crc kubenswrapper[4687]: E1203 17:40:31.407410 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:31 crc kubenswrapper[4687]: E1203 17:40:31.407522 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.480783 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.480858 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.480870 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.480886 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.481252 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.583602 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.583635 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.583644 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.583665 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.583675 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.685916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.685959 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.685970 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.685988 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.686000 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.789072 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.789138 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.789155 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.789174 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.789186 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.891964 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.892019 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.892035 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.892055 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.892066 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.995028 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.995068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.995078 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.995093 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:31 crc kubenswrapper[4687]: I1203 17:40:31.995104 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:31Z","lastTransitionTime":"2025-12-03T17:40:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.097776 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.097822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.097832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.097848 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.097859 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:32Z","lastTransitionTime":"2025-12-03T17:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.200509 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.200555 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.200564 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.200579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.200592 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:32Z","lastTransitionTime":"2025-12-03T17:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.303803 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.303846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.303856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.303870 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.303881 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:32Z","lastTransitionTime":"2025-12-03T17:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.406408 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.406446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.406513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.406526 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.406549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.406567 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:32Z","lastTransitionTime":"2025-12-03T17:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:32 crc kubenswrapper[4687]: E1203 17:40:32.406632 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.406721 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:32 crc kubenswrapper[4687]: E1203 17:40:32.406806 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.509089 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.509154 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.509167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.509186 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.509202 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:32Z","lastTransitionTime":"2025-12-03T17:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.612236 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.612265 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.612274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.612288 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.612298 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:32Z","lastTransitionTime":"2025-12-03T17:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.715447 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.715530 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.715538 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.715554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.715565 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:32Z","lastTransitionTime":"2025-12-03T17:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.818470 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.818541 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.818554 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.818580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.818605 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:32Z","lastTransitionTime":"2025-12-03T17:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.922343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.922389 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.922404 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.922420 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:32 crc kubenswrapper[4687]: I1203 17:40:32.922431 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:32Z","lastTransitionTime":"2025-12-03T17:40:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.025113 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.025167 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.025176 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.025192 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.025202 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.127595 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.127637 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.127648 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.127666 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.127678 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.229821 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.229863 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.229901 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.229919 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.229930 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.332621 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.332661 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.332670 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.332684 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.332693 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.406588 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.406619 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:33 crc kubenswrapper[4687]: E1203 17:40:33.406736 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:33 crc kubenswrapper[4687]: E1203 17:40:33.407013 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.436287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.436361 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.436389 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.436411 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.436428 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.538611 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.538670 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.538685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.538706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.538723 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.641525 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.641590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.641603 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.641624 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.641635 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.744744 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.744785 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.744796 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.744814 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.744825 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.847234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.847541 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.847653 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.847764 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.847860 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.950449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.950504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.950517 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.950540 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:33 crc kubenswrapper[4687]: I1203 17:40:33.950553 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:33Z","lastTransitionTime":"2025-12-03T17:40:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.053153 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.053196 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.053223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.053240 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.053250 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.155448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.155492 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.155504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.155522 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.155532 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.258525 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.258829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.258907 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.258982 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.259049 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.361440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.361706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.361781 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.361855 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.361930 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.406836 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:34 crc kubenswrapper[4687]: E1203 17:40:34.406978 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.406851 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:34 crc kubenswrapper[4687]: E1203 17:40:34.407409 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.407658 4687 scope.go:117] "RemoveContainer" containerID="4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00" Dec 03 17:40:34 crc kubenswrapper[4687]: E1203 17:40:34.407906 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.464021 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.464062 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.464070 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.464082 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.464090 4687 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.567375 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.567420 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.567430 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.567451 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.567466 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.670994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.671175 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.671311 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.671406 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.671483 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.775813 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.775882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.775894 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.775920 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.775940 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.879913 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.880487 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.880590 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.880691 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.880761 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.882035 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:34 crc kubenswrapper[4687]: E1203 17:40:34.882373 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:34 crc kubenswrapper[4687]: E1203 17:40:34.882565 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs podName:2c067216-97d2-43a1-a8a6-5719153b3c61 nodeName:}" failed. No retries permitted until 2025-12-03 17:41:06.882527678 +0000 UTC m=+99.773223321 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs") pod "network-metrics-daemon-w8876" (UID: "2c067216-97d2-43a1-a8a6-5719153b3c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.984223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.984274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.984284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.984307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:34 crc kubenswrapper[4687]: I1203 17:40:34.984319 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:34Z","lastTransitionTime":"2025-12-03T17:40:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.087234 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.087292 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.087304 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.087321 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.087332 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:35Z","lastTransitionTime":"2025-12-03T17:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.189586 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.189667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.189677 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.189698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.189710 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:35Z","lastTransitionTime":"2025-12-03T17:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.292809 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.292904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.292921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.292981 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.293000 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:35Z","lastTransitionTime":"2025-12-03T17:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.396231 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.396507 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.396567 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.396629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.396688 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:35Z","lastTransitionTime":"2025-12-03T17:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.406632 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.406732 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:35 crc kubenswrapper[4687]: E1203 17:40:35.407361 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:35 crc kubenswrapper[4687]: E1203 17:40:35.407561 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.499691 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.499749 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.499766 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.499790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.499805 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:35Z","lastTransitionTime":"2025-12-03T17:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.601871 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.601925 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.601933 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.601948 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.601956 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:35Z","lastTransitionTime":"2025-12-03T17:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.703906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.704190 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.704278 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.704383 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.704478 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:35Z","lastTransitionTime":"2025-12-03T17:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.807448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.808515 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.808786 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.809332 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.809503 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:35Z","lastTransitionTime":"2025-12-03T17:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.844786 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbjvs_ede1a722-2df8-433e-b8be-82c434be7d02/kube-multus/0.log" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.844916 4687 generic.go:334] "Generic (PLEG): container finished" podID="ede1a722-2df8-433e-b8be-82c434be7d02" containerID="261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123" exitCode=1 Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.844975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbjvs" event={"ID":"ede1a722-2df8-433e-b8be-82c434be7d02","Type":"ContainerDied","Data":"261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.845786 4687 scope.go:117] "RemoveContainer" containerID="261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.858711 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:35Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.898361 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef0b114-8148-4072-a2df-80a1497e344d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fa4cc1bb33184c2f361e06794c4e72232384768d410edad74a356209aea66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4476b45459ca2a59bddf09fe3cd6919bb80f10f388c32ffd12129506f24fba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae0950bd389d58f692936a9eb8c880a239a7eff1d205c71318f07df98e5f8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62
dab0f77f57e12f6abaa8470690f1180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:35Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.919358 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.919915 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.919936 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.919963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:35 crc 
kubenswrapper[4687]: I1203 17:40:35.919988 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:35Z","lastTransitionTime":"2025-12-03T17:40:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.942779 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:35Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.964376 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10b
a6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:35Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:35 crc kubenswrapper[4687]: I1203 17:40:35.988763 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:17Z\\\",\\\"message\\\":\\\"1.Node event handler 2 for removal\\\\nI1203 17:40:17.355060 6351 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355148 6351 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355186 6351 
ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1203 17:40:17.355198 6351 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1203 17:40:17.355212 6351 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355234 6351 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 338.559µs)\\\\nI1203 17:40:17.355271 6351 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:40:17.355373 6351 factory.go:656] Stopping watch factory\\\\nI1203 17:40:17.355416 6351 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:40:17.355482 6351 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 17:40:17.355608 6351 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12
291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:35Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.010186 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.022417 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.022697 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 
17:40:36.022895 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.022907 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.022924 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.022935 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.038162 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.051530 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:35Z\\\",\\\"message\\\":\\\"2025-12-03T17:39:50+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b\\\\n2025-12-03T17:39:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b to /host/opt/cni/bin/\\\\n2025-12-03T17:39:50Z [verbose] multus-daemon started\\\\n2025-12-03T17:39:50Z [verbose] Readiness Indicator file check\\\\n2025-12-03T17:40:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.064342 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.081753 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.097330 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.112481 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.125489 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.125541 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.125551 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 
17:40:36.125569 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.125583 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.127622 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888c
f2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.146971 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761
060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.163314 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.177699 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.193447 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.228424 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.228467 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.228479 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.228493 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.228504 4687 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.332449 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.332549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.332570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.332600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.332619 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.406483 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:36 crc kubenswrapper[4687]: E1203 17:40:36.406644 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.407144 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:36 crc kubenswrapper[4687]: E1203 17:40:36.407453 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.436912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.436986 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.437070 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.437109 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.437178 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.540544 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.540600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.540613 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.540635 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.540649 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.643273 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.643322 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.643335 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.643353 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.643367 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.746204 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.746251 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.746263 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.746282 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.746296 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.849267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.849343 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.849373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.849409 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.849449 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.851944 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbjvs_ede1a722-2df8-433e-b8be-82c434be7d02/kube-multus/0.log" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.852041 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbjvs" event={"ID":"ede1a722-2df8-433e-b8be-82c434be7d02","Type":"ContainerStarted","Data":"d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.872052 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.886455 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b
56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.919225 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:17Z\\\",\\\"message\\\":\\\"1.Node event handler 2 for removal\\\\nI1203 17:40:17.355060 6351 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355148 6351 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355186 6351 
ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1203 17:40:17.355198 6351 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1203 17:40:17.355212 6351 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355234 6351 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 338.559µs)\\\\nI1203 17:40:17.355271 6351 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:40:17.355373 6351 factory.go:656] Stopping watch factory\\\\nI1203 17:40:17.355416 6351 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:40:17.355482 6351 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 17:40:17.355608 6351 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12
291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.933910 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.954039 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.954573 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.954789 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.955274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.955447 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.955613 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:36Z","lastTransitionTime":"2025-12-03T17:40:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.974406 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:36 crc kubenswrapper[4687]: I1203 17:40:36.994220 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:36Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.015076 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.036528 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:35Z\\\",\\\"message\\\":\\\"2025-12-03T17:39:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b\\\\n2025-12-03T17:39:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b to /host/opt/cni/bin/\\\\n2025-12-03T17:39:50Z [verbose] multus-daemon started\\\\n2025-12-03T17:39:50Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T17:40:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.054928 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.059463 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.059537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.059556 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.059591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.059614 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.071603 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.090520 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.105494 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.125262 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.141824 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef0b114-8148-4072-a2df-80a1497e344d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fa4cc1bb33184c2f361e06794c4e72232384768d410edad74a356209aea66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4476b45459ca2a59bddf09fe3cd6919bb80f10f388c32ffd12129506f24fba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae0950bd389d58f692936a9eb8c880a239a7eff1d205c71318f07df98e5f8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.162864 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.162902 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.162913 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.162928 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.162937 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.172243 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.188030 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10b
a6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.202446 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc 
kubenswrapper[4687]: I1203 17:40:37.265500 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.265562 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.265580 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.265605 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.265622 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.368763 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.368820 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.368832 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.368857 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.368870 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.406733 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.406812 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:37 crc kubenswrapper[4687]: E1203 17:40:37.406923 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:37 crc kubenswrapper[4687]: E1203 17:40:37.407235 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.424626 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.441671 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.463204 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:35Z\\\",\\\"message\\\":\\\"2025-12-03T17:39:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b\\\\n2025-12-03T17:39:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b to /host/opt/cni/bin/\\\\n2025-12-03T17:39:50Z [verbose] multus-daemon started\\\\n2025-12-03T17:39:50Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T17:40:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.471738 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.471812 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.471830 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.471861 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.471884 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.475512 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.490242 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.504974 4687 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232
620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.524589 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.536376 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.555280 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.570990 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.575024 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.575058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.575072 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.575097 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.575133 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.589414 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.604636 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10b
a6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.621674 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc 
kubenswrapper[4687]: I1203 17:40:37.637036 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef0b114-8148-4072-a2df-80a1497e344d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fa4cc1bb33184c2f361e06794c4e72232384768d410edad74a356209aea66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4476b45459ca2a59bddf09fe3cd6919bb80f10f388c32ffd12129506f24fba3\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae0950bd389d58f692936a9eb8c880a239a7eff1d205c71318f07df98e5f8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.660238 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.672376 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.678137 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.678190 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.678205 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.678227 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.678242 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.690002 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:17Z\\\",\\\"message\\\":\\\"1.Node event handler 2 for removal\\\\nI1203 17:40:17.355060 6351 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355148 6351 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355186 6351 
ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1203 17:40:17.355198 6351 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1203 17:40:17.355212 6351 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355234 6351 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 338.559µs)\\\\nI1203 17:40:17.355271 6351 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:40:17.355373 6351 factory.go:656] Stopping watch factory\\\\nI1203 17:40:17.355416 6351 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:40:17.355482 6351 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 17:40:17.355608 6351 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12
291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.704183 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-03T17:40:37Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.781196 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.781250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.781260 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.781281 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.781292 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.884192 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.884249 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.884261 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.884284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.884298 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.987197 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.987243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.987254 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.987269 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:37 crc kubenswrapper[4687]: I1203 17:40:37.987280 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:37Z","lastTransitionTime":"2025-12-03T17:40:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.089503 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.089761 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.089829 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.089888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.089942 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:38Z","lastTransitionTime":"2025-12-03T17:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.193672 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.193937 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.194001 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.194096 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.194191 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:38Z","lastTransitionTime":"2025-12-03T17:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.297596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.297652 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.297666 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.297686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.297699 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:38Z","lastTransitionTime":"2025-12-03T17:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.401440 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.401497 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.401512 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.401537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.401553 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:38Z","lastTransitionTime":"2025-12-03T17:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.406749 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.406751 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:38 crc kubenswrapper[4687]: E1203 17:40:38.406885 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:38 crc kubenswrapper[4687]: E1203 17:40:38.407036 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.504373 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.504430 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.504441 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.504464 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.504483 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:38Z","lastTransitionTime":"2025-12-03T17:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.607448 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.607705 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.607828 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.607917 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.608000 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:38Z","lastTransitionTime":"2025-12-03T17:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.711702 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.711782 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.711803 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.711839 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.711861 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:38Z","lastTransitionTime":"2025-12-03T17:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.815146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.815206 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.815218 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.815237 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.815249 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:38Z","lastTransitionTime":"2025-12-03T17:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.918545 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.918596 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.918607 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.918624 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:38 crc kubenswrapper[4687]: I1203 17:40:38.918640 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:38Z","lastTransitionTime":"2025-12-03T17:40:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.022146 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.022182 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.022191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.022207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.022220 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.124818 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.124873 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.124889 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.124932 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.124946 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.227960 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.228018 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.228031 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.228054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.228067 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.331541 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.331629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.331647 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.331677 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.331699 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.406265 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:39 crc kubenswrapper[4687]: E1203 17:40:39.406424 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.406277 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:39 crc kubenswrapper[4687]: E1203 17:40:39.406663 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.440823 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.440885 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.440900 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.440921 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.440936 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.543555 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.543611 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.543622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.543641 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.543654 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.645791 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.645830 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.645842 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.645859 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.645871 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.749161 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.749207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.749214 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.749229 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.749238 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.852871 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.852916 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.852954 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.852976 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.852993 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.955529 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.955570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.955578 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.955591 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.955600 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.979912 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.979960 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.979972 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.979987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.979999 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:39 crc kubenswrapper[4687]: E1203 17:40:39.994089 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:39Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.998199 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.998247 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.998262 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.998284 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:39 crc kubenswrapper[4687]: I1203 17:40:39.998299 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:39Z","lastTransitionTime":"2025-12-03T17:40:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: E1203 17:40:40.014681 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:40Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.018223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.018295 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.018322 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.018353 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.018376 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: E1203 17:40:40.030110 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:40Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.033719 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.033760 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.033776 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.033796 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.033809 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: E1203 17:40:40.049952 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:40Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.053071 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.053111 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.053141 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.053160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.053172 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: E1203 17:40:40.066422 4687 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ee1562dd-e220-43f1-83b5-a41fc656114f\\\",\\\"systemUUID\\\":\\\"07bf91f7-6553-4869-9d97-b90a2ed5644f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:40Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:40 crc kubenswrapper[4687]: E1203 17:40:40.066536 4687 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.067728 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.067759 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.067770 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.067785 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.067794 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.170388 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.170467 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.170478 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.170502 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.170517 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.273834 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.273872 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.273881 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.273895 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.273905 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.376725 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.376775 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.376787 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.376808 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.376821 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.407105 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:40 crc kubenswrapper[4687]: E1203 17:40:40.407237 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.407309 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:40 crc kubenswrapper[4687]: E1203 17:40:40.407461 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.479684 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.479724 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.479734 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.479751 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.479760 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.583280 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.583345 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.583362 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.583387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.583404 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.686198 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.686251 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.686264 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.686283 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.686301 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.789219 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.789287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.789305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.789329 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.789347 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.891712 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.891779 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.891793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.891814 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.891829 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.995389 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.995431 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.995442 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.995457 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:40 crc kubenswrapper[4687]: I1203 17:40:40.995467 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:40Z","lastTransitionTime":"2025-12-03T17:40:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.098344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.098394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.098410 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.098431 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.098446 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:41Z","lastTransitionTime":"2025-12-03T17:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.201666 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.201723 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.201736 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.201758 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.201772 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:41Z","lastTransitionTime":"2025-12-03T17:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.305443 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.305887 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.305974 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.306060 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.306189 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:41Z","lastTransitionTime":"2025-12-03T17:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.406606 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.406696 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:41 crc kubenswrapper[4687]: E1203 17:40:41.407559 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:41 crc kubenswrapper[4687]: E1203 17:40:41.407701 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.409058 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.409278 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.409378 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.409478 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.409559 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:41Z","lastTransitionTime":"2025-12-03T17:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.512963 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.513579 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.513888 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.514003 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.514102 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:41Z","lastTransitionTime":"2025-12-03T17:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.617520 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.617915 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.617991 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.618073 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.618209 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:41Z","lastTransitionTime":"2025-12-03T17:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.721001 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.721047 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.721059 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.721085 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.721099 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:41Z","lastTransitionTime":"2025-12-03T17:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.824993 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.825046 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.825056 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.825078 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.825096 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:41Z","lastTransitionTime":"2025-12-03T17:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.928788 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.929248 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.929494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.929636 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:41 crc kubenswrapper[4687]: I1203 17:40:41.929771 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:41Z","lastTransitionTime":"2025-12-03T17:40:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.033238 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.033330 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.033344 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.033366 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.033378 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.136523 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.136585 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.136597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.136622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.136634 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.240680 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.240749 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.240761 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.240785 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.240799 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.344028 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.344082 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.344095 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.344142 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.344164 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.407110 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.407192 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:42 crc kubenswrapper[4687]: E1203 17:40:42.407400 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:42 crc kubenswrapper[4687]: E1203 17:40:42.407592 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.446741 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.446779 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.446793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.446815 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.446828 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.549494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.549559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.549577 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.549604 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.549658 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.652751 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.652793 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.652831 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.652850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.652859 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.756007 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.756056 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.756068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.756083 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.756096 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.858620 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.858687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.858724 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.858756 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.858778 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.961823 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.961882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.961899 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.961922 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:42 crc kubenswrapper[4687]: I1203 17:40:42.961939 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:42Z","lastTransitionTime":"2025-12-03T17:40:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.064822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.064911 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.064935 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.064967 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.064989 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.168243 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.168402 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.168427 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.168460 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.168486 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.271781 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.271850 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.271867 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.271892 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.271910 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.376733 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.376840 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.376858 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.376882 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.376902 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.406385 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.406473 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:43 crc kubenswrapper[4687]: E1203 17:40:43.406568 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:43 crc kubenswrapper[4687]: E1203 17:40:43.406815 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.480790 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.480834 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.480846 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.480865 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.480877 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.583769 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.583852 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.583877 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.583909 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.583933 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.686600 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.686668 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.686689 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.686713 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.686764 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.789699 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.789745 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.789772 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.789795 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.789813 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.892462 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.892510 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.892526 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.892547 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.892562 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.995893 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.996597 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.996766 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.996924 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:43 crc kubenswrapper[4687]: I1203 17:40:43.997068 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:43Z","lastTransitionTime":"2025-12-03T17:40:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.100475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.100558 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.100588 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.100615 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.100631 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:44Z","lastTransitionTime":"2025-12-03T17:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.203010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.203068 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.203090 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.203152 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.203178 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:44Z","lastTransitionTime":"2025-12-03T17:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.306388 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.306533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.306559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.306592 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.306612 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:44Z","lastTransitionTime":"2025-12-03T17:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.406497 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.406570 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:44 crc kubenswrapper[4687]: E1203 17:40:44.406909 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:44 crc kubenswrapper[4687]: E1203 17:40:44.407106 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.409207 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.409265 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.409287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.409318 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.409341 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:44Z","lastTransitionTime":"2025-12-03T17:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.421743 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.513356 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.513819 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.514018 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.514307 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.514522 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:44Z","lastTransitionTime":"2025-12-03T17:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.617177 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.617484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.617576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.617678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.617771 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:44Z","lastTransitionTime":"2025-12-03T17:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.720274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.720588 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.720827 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.720994 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.721160 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:44Z","lastTransitionTime":"2025-12-03T17:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.824250 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.824306 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.824318 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.824335 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.824348 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:44Z","lastTransitionTime":"2025-12-03T17:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.927658 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.927741 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.927765 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.927792 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:44 crc kubenswrapper[4687]: I1203 17:40:44.927814 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:44Z","lastTransitionTime":"2025-12-03T17:40:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.031241 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.031315 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.031338 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.031370 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.031393 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.136764 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.136815 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.136826 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.137204 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.137260 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.240353 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.240394 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.240406 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.240422 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.240434 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.344223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.344266 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.344276 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.344293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.344307 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.407244 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:45 crc kubenswrapper[4687]: E1203 17:40:45.407762 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.407377 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:45 crc kubenswrapper[4687]: E1203 17:40:45.408275 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.446238 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.446297 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.446305 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.446319 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.446346 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.549054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.549100 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.549112 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.549153 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.549167 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.653057 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.653148 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.653166 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.653191 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.653212 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.755571 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.755607 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.755617 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.755630 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.755639 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.858741 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.858783 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.858794 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.858810 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.858822 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.960904 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.960966 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.960987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.961010 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:45 crc kubenswrapper[4687]: I1203 17:40:45.961025 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:45Z","lastTransitionTime":"2025-12-03T17:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.063623 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.063686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.063704 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.063728 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.063740 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:46Z","lastTransitionTime":"2025-12-03T17:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.166350 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.166926 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.167019 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.167145 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.167251 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:46Z","lastTransitionTime":"2025-12-03T17:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.270321 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.270403 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.270422 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.270452 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.270474 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:46Z","lastTransitionTime":"2025-12-03T17:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.373717 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.373785 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.373798 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.373822 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.373835 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:46Z","lastTransitionTime":"2025-12-03T17:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.407203 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.407252 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:46 crc kubenswrapper[4687]: E1203 17:40:46.407454 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:46 crc kubenswrapper[4687]: E1203 17:40:46.407600 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.477499 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.477556 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.477570 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.477593 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.477607 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:46Z","lastTransitionTime":"2025-12-03T17:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.581390 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.581459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.581477 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.581504 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.581522 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:46Z","lastTransitionTime":"2025-12-03T17:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.692409 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.692459 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.692474 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.692494 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.692510 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:46Z","lastTransitionTime":"2025-12-03T17:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.795997 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.796081 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.796101 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.796169 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.796194 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:46Z","lastTransitionTime":"2025-12-03T17:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.899105 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.899194 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.899209 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.899226 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:46 crc kubenswrapper[4687]: I1203 17:40:46.899236 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:46Z","lastTransitionTime":"2025-12-03T17:40:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.002484 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.002533 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.002548 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.002567 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.002580 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.105871 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.105961 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.105979 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.106006 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.106026 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.210001 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.210905 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.211211 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.211915 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.212206 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.315610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.315698 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.315720 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.315749 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.315766 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.407113 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:47 crc kubenswrapper[4687]: E1203 17:40:47.407478 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.407868 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:47 crc kubenswrapper[4687]: E1203 17:40:47.408622 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.409338 4687 scope.go:117] "RemoveContainer" containerID="4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.419576 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.419629 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.419643 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.419708 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.419730 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.432769 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.449559 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.468047 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.487432 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.514006 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f32
3c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.522380 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.522418 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.522426 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.522442 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.522451 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.534862 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef0b114-8148-4072-a2df-80a1497e344d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fa4cc1bb33184c2f361e06794c4e72232384768d410edad74a356209aea66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4476b45459ca2a59bddf09fe3cd6919bb80f10f388c32ffd12129506f24fba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae0950bd389d58f692936a9eb8c880a239a7eff1d205c71318f07df98e5f8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.563686 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.579609 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10ba6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.596208 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc 
kubenswrapper[4687]: I1203 17:40:47.615460 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb2fcd5-df8c-4075-9c65-a5726186f3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1b23ac891a3309f9be744c6c6414a34089909552a015c66f530fb14fbe5646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://2c4bb067d93092c680e0f6c68d9ac832c7ecab7b29ef324938f939d0a5843d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c4bb067d93092c680e0f6c68d9ac832c7ecab7b29ef324938f939d0a5843d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.625446 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.625496 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.625513 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 
17:40:47.625534 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.625548 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.629805 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.646331 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b
56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.676331 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:17Z\\\",\\\"message\\\":\\\"1.Node event handler 2 for removal\\\\nI1203 17:40:17.355060 6351 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355148 6351 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355186 6351 
ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1203 17:40:17.355198 6351 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1203 17:40:17.355212 6351 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355234 6351 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 338.559µs)\\\\nI1203 17:40:17.355271 6351 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:40:17.355373 6351 factory.go:656] Stopping watch factory\\\\nI1203 17:40:17.355416 6351 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:40:17.355482 6351 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 17:40:17.355608 6351 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12
291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.690187 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.703712 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.717633 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320
bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.727987 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.728040 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.728054 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.728076 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.728090 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.733413 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.747285 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.762162 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:35Z\\\",\\\"message\\\":\\\"2025-12-03T17:39:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b\\\\n2025-12-03T17:39:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b to /host/opt/cni/bin/\\\\n2025-12-03T17:39:50Z [verbose] multus-daemon started\\\\n2025-12-03T17:39:50Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T17:40:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:47Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.831559 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.831616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.831639 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.831669 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.831692 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.935475 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.935537 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.935549 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.935574 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:47 crc kubenswrapper[4687]: I1203 17:40:47.935590 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:47Z","lastTransitionTime":"2025-12-03T17:40:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.039201 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.039274 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.039287 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.039310 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.039331 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.143653 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.143706 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.143717 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.143733 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.143746 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.246957 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.247011 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.247021 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.247038 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.247049 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.350945 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.351041 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.351067 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.351102 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.351165 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.406767 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:48 crc kubenswrapper[4687]: E1203 17:40:48.406916 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.406766 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:48 crc kubenswrapper[4687]: E1203 17:40:48.407239 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.454334 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.454377 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.454387 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.454404 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.454416 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.557836 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.557893 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.557906 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.557928 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.557941 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.660902 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.660971 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.660989 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.661011 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.661029 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.764223 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.764267 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.764275 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.764293 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.764303 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.867481 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.867841 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.867856 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.867875 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.867888 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.900050 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/2.log" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.903563 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.904054 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.915579 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cb2fcd5-df8c-4075-9c65-a5726186f3ce\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1b23ac891a3309f9be744c6c6414a34089909552a015c66f530fb14fbe5646\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c4bb067d93092c680e0f6c68d9ac832c7ecab7b29ef324938f939d0a5843d46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c4bb067d93092c680e0f6c68d9ac832c7ecab7b29ef324938f939d0a5843d46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.924871 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d3653c4056619fce33d6af638b5fa44fab29f2c05577a043c75cdfdbaff0b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.934897 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fab93456-303f-4c39-93a9-f52dcab12ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f51cab61210dda7f17b467a11ac7806717af0a83574859089d29501dc51e3001\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dw6mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gz2wq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.952692 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7fe22da-1ea3-49ba-b2c6-851ff064db76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:17Z\\\",\\\"message\\\":\\\"1.Node event handler 2 for removal\\\\nI1203 17:40:17.355060 6351 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355148 6351 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355186 6351 
ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1203 17:40:17.355198 6351 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1203 17:40:17.355212 6351 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1203 17:40:17.355234 6351 obj_retry.go:420] Function iterateRetryResources for *v1.Pod ended (in 338.559µs)\\\\nI1203 17:40:17.355271 6351 handler.go:208] Removed *v1.Node event handler 2\\\\nI1203 17:40:17.355373 6351 factory.go:656] Stopping watch factory\\\\nI1203 17:40:17.355416 6351 ovnkube.go:599] Stopped ovnkube\\\\nI1203 17:40:17.355482 6351 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1203 17:40:17.355608 6351 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:40:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6kjw5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-668q2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.965768 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kbjvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ede1a722-2df8-433e-b8be-82c434be7d02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T17:40:35Z\\\",\\\"message\\\":\\\"2025-12-03T17:39:50+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b\\\\n2025-12-03T17:39:50+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d9934679-472c-4819-a3f8-d0da55617e5b to /host/opt/cni/bin/\\\\n2025-12-03T17:39:50Z [verbose] multus-daemon started\\\\n2025-12-03T17:39:50Z [verbose] 
Readiness Indicator file check\\\\n2025-12-03T17:40:35Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4wnj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-kbjvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.970294 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.970322 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.970332 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.970347 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.970357 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:48Z","lastTransitionTime":"2025-12-03T17:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.978411 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7bvc5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb6870b7-890e-4352-b873-f6676b3315bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b9151fc7bf227b91708f3e1c79dba819c336e12b3d9647dac9d13fdc6afa8fbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l8nb8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7bvc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:48 crc kubenswrapper[4687]: I1203 17:40:48.991584 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca4f3468-e2b6-472c-aad7-4abac17484f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6104d032f2f5a5edea7f142e6d16aff8e59f19bc0b09d1c4b91065391ac763ad\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c73a83d60d6c5fae529ddf9737df120db3f6a19415c94c7487e7c504426ed41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41a571ebf82b28042da67a71536b158bcff98a39ffcf654a3bc863731c6922ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:48Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.005005 4687 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6587599f-4dc2-4ad2-9a44-2453eae89243\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed778c8c5662061b7fc9f232
620b96bd0099b107aa00ea361d9f97235b9cada\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1203 17:39:39.827778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1203 17:39:39.828859 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1074068367/tls.crt::/tmp/serving-cert-1074068367/tls.key\\\\\\\"\\\\nI1203 17:39:45.288153 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1203 17:39:45.292601 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1203 17:39:45.292754 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1203 17:39:45.292854 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1203 17:39:45.292933 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1203 17:39:45.303291 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1203 17:39:45.303324 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303331 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1203 17:39:45.303338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1203 17:39:45.303343 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1203 17:39:45.303348 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1203 17:39:45.303353 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1203 17:39:45.303487 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1203 17:39:45.309093 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.020183 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.033673 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.047990 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eb80768-2a1e-4632-8f1f-453cce62fd5f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24f7941a86a072de2d2a7e21dc7267176452bdf33763a50d172279d42e2597a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c8421dd77ce01241394a9fdd60b6b9134f58fb7c18a61baed3e38bc88775a3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d5f69f17ebcfe1182dbd3ca3b14ab41fe477b5a36f463da8ae31932ffacc175\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:50Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89f404078d85861890caaf282b9ccda7552392a491b8f7cb924324068b84ca8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3761
060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3761060bd8aa1bbb14afa4434015884fbee270e06d288b69350b0d91fcf725f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ce38c6e6a8912070949588097a86f323c428be9809806f87f80ddd844bf9db7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:54Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccff50549b138d225d64de7220cf1ac586d7a0dfe0c07c0086d175d34cea0a29\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hdqfw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-hrqh4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.061066 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ac2b71b85a2b050adae308eb61bd68e3e18d4dbe860b5938bb626fe2038afa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.073398 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.073435 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.073445 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.073461 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.073470 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.076709 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.089464 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08eca747d9286726572d76719745c2300cc01d98f352eb5433c0c902f1e04bab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aba3ee8cc821c2bd2232bf9651f5a9633b180b3f921eb0f616e5499bc86b2ad6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.099722 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-hhb6c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2458ef0-c3e4-4bb4-9698-92445412cca7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://233ec05c0c32b5fc0e017c5bbd41b4b2574a2ed9e503b332a6a8dfaa682576cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cs274\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-hhb6c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.114394 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cef0b114-8148-4072-a2df-80a1497e344d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3fa4cc1bb33184c2f361e06794c4e72232384768d410edad74a356209aea66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4476b45459ca2a59bddf09fe3cd6919bb80f10f388c32ffd12129506f24fba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae0950bd389d58f692936a9eb8c880a239a7eff1d205c71318f07df98e5f8b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61c2bf157c0a52c8993c44d44529e7e62dab0f77f57e12f6abaa8470690f1180\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.138040 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8996cc11-df63-4967-87cf-9232262848d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:39:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1e9d65c2688ed832e11c22fbaeb45787d6d84138054f0ad03808ddba6b9a80d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b15cfcb2458339f33469491eeb56ebcc05b14de5594c33609285a71b2dea2c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a7333548c1c1536997c824c3e18a19e882ced5fc80a6cd352f8babbc2d4e4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8bdac07419b026b416f9009dc03fff45167b129242ff4c11610221f269a37c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://002b9578c602375cb26f945d36d04dca6d61f6776f715276dbc4ace4d21a8087\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:39:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a526f3fe328bb799b8b09d967eb3c9a4445e5d6bd808cd093d4c166b4bd6ced\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://873d103c98e218ad1246d3e9cb4bf70b9fde1a716341cbbde6b1685d46e8e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:28Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc30c095ff91a354642cdd6e8ffff43a84e83c5c66bece6b1289292817cf4b2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T17:39:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T17:39:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:39:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.152676 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f507ce27-2982-4592-a5d5-f7b78e85363a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42f9ca79d02413a7a6cd84f18e082605c694c8672129149239cab684b1d2f3f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28971e75bfb0b561e1f29e108d749e260d10ba6fb8cff48a93068c6ecc7fc6e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T17:40:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwc95\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nkgnp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z" Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.166633 4687 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-w8876" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c067216-97d2-43a1-a8a6-5719153b3c61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T17:40:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppzdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-03T17:40:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-w8876\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-03T17:40:49Z is after 2025-08-24T17:21:41Z"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.176686 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.176804 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.176825 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.176854 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.176874 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.279687 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.279727 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.279735 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.279751 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.279761 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.348251 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.348439 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:53.348415644 +0000 UTC m=+146.239111097 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.348541 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.348587 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.348691 4687 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.348724 4687 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.348767 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:41:53.348749392 +0000 UTC m=+146.239444865 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.348793 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-03 17:41:53.348781183 +0000 UTC m=+146.239476646 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.382208 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.382289 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.382319 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.382348 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.382372 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.406957 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.407058 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.407217 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.407517 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.449459 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.449526 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.449720 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.449735 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.449747 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.449759 4687 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.449765 4687 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.449769 4687 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.449815 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-03 17:41:53.449798923 +0000 UTC m=+146.340494356 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.449830 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-03 17:41:53.449823873 +0000 UTC m=+146.340519306 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.485622 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.485667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.485678 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.485697 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.485709 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.588602 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.588659 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.588672 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.588696 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.588711 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.691175 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.691254 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.691277 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.691309 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.691331 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.794376 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.794422 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.794435 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.794451 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.794462 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.897553 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.897604 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.897616 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.897634 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.897646 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.910559 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/3.log"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.911591 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/2.log"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.914810 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" exitCode=1
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.914850 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"}
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.914894 4687 scope.go:117] "RemoveContainer" containerID="4d20e197f81d6b319211c21567da6331b13f32b36c935272509d90dbed517c00"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.915727 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"
Dec 03 17:40:49 crc kubenswrapper[4687]: E1203 17:40:49.915963 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.957323 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=64.957306525 podStartE2EDuration="1m4.957306525s" podCreationTimestamp="2025-12-03 17:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:49.956972527 +0000 UTC m=+82.847668000" watchObservedRunningTime="2025-12-03 17:40:49.957306525 +0000 UTC m=+82.848001958"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.957463 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=62.957456589 podStartE2EDuration="1m2.957456589s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:49.940146759 +0000 UTC m=+82.830842212" watchObservedRunningTime="2025-12-03 17:40:49.957456589 +0000 UTC m=+82.848152022"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.999682 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.999713 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.999721 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.999735 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:49 crc kubenswrapper[4687]: I1203 17:40:49.999745 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:49Z","lastTransitionTime":"2025-12-03T17:40:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.016573 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7bvc5" podStartSLOduration=63.016552567 podStartE2EDuration="1m3.016552567s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.016458255 +0000 UTC m=+82.907153688" watchObservedRunningTime="2025-12-03 17:40:50.016552567 +0000 UTC m=+82.907248000"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.016854 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kbjvs" podStartSLOduration=62.016847505 podStartE2EDuration="1m2.016847505s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.005401407 +0000 UTC m=+82.896096840" watchObservedRunningTime="2025-12-03 17:40:50.016847505 +0000 UTC m=+82.907542938"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.079472 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-hhb6c" podStartSLOduration=63.079450135 podStartE2EDuration="1m3.079450135s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.079322372 +0000 UTC m=+82.970017805" watchObservedRunningTime="2025-12-03 17:40:50.079450135 +0000 UTC m=+82.970145568"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.094830 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-hrqh4" podStartSLOduration=62.094811975 podStartE2EDuration="1m2.094811975s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.094805475 +0000 UTC m=+82.985500908" watchObservedRunningTime="2025-12-03 17:40:50.094811975 +0000 UTC m=+82.985507418"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.102099 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.102160 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.102172 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.102189 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.102200 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:50Z","lastTransitionTime":"2025-12-03T17:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.136639 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=61.136609273 podStartE2EDuration="1m1.136609273s" podCreationTimestamp="2025-12-03 17:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.135108414 +0000 UTC m=+83.025803857" watchObservedRunningTime="2025-12-03 17:40:50.136609273 +0000 UTC m=+83.027304726"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.137173 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=30.137164867 podStartE2EDuration="30.137164867s" podCreationTimestamp="2025-12-03 17:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.107402783 +0000 UTC m=+82.998098216" watchObservedRunningTime="2025-12-03 17:40:50.137164867 +0000 UTC m=+83.027860310"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.149820 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nkgnp" podStartSLOduration=62.149803186 podStartE2EDuration="1m2.149803186s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.149630202 +0000 UTC m=+83.040325635" watchObservedRunningTime="2025-12-03 17:40:50.149803186 +0000 UTC m=+83.040498629"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.188679 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=6.188655008 podStartE2EDuration="6.188655008s" podCreationTimestamp="2025-12-03 17:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.176315156 +0000 UTC m=+83.067010589" watchObservedRunningTime="2025-12-03 17:40:50.188655008 +0000 UTC m=+83.079350451"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.199990 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podStartSLOduration=62.199974902 podStartE2EDuration="1m2.199974902s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.199327746 +0000 UTC m=+83.090023199" watchObservedRunningTime="2025-12-03 17:40:50.199974902 +0000 UTC m=+83.090670355"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.204610 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.204656 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.204667 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.204685 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.204698 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:50Z","lastTransitionTime":"2025-12-03T17:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.308065 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.308180 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.308197 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.308215 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.308227 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:50Z","lastTransitionTime":"2025-12-03T17:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.333445 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.333509 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.333529 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.333553 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.333572 4687 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T17:40:50Z","lastTransitionTime":"2025-12-03T17:40:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.406750 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.406875 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876"
Dec 03 17:40:50 crc kubenswrapper[4687]: E1203 17:40:50.406952 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 03 17:40:50 crc kubenswrapper[4687]: E1203 17:40:50.407151 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.417551 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl"]
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.418392 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.420836 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.421618 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.421627 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.422100 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.460948 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ce29685-f002-411e-918f-145a91f0de5a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.461008 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ce29685-f002-411e-918f-145a91f0de5a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.461043 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ce29685-f002-411e-918f-145a91f0de5a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.461079 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ce29685-f002-411e-918f-145a91f0de5a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl"
Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.461264 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce29685-f002-411e-918f-145a91f0de5a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") "
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.562099 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ce29685-f002-411e-918f-145a91f0de5a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.562193 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ce29685-f002-411e-918f-145a91f0de5a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.562271 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce29685-f002-411e-918f-145a91f0de5a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.562335 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ce29685-f002-411e-918f-145a91f0de5a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.562361 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/0ce29685-f002-411e-918f-145a91f0de5a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.562398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0ce29685-f002-411e-918f-145a91f0de5a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.562438 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0ce29685-f002-411e-918f-145a91f0de5a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.564026 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ce29685-f002-411e-918f-145a91f0de5a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.569540 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ce29685-f002-411e-918f-145a91f0de5a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 
17:40:50.582580 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ce29685-f002-411e-918f-145a91f0de5a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-s5nhl\" (UID: \"0ce29685-f002-411e-918f-145a91f0de5a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.732082 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.920114 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/3.log" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.924231 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.924362 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" event={"ID":"0ce29685-f002-411e-918f-145a91f0de5a","Type":"ContainerStarted","Data":"1172b5bb1180c6cf5044f5b67d4bd543f0a6360d1cb09e9b85f10130bea04778"} Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.924422 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" event={"ID":"0ce29685-f002-411e-918f-145a91f0de5a","Type":"ContainerStarted","Data":"0b18e2db55cd1d7f7b57fc07ad2534d8cb9bfd865016fd92e3a66b44213930c8"} Dec 03 17:40:50 crc kubenswrapper[4687]: E1203 17:40:50.924435 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" Dec 03 17:40:50 crc kubenswrapper[4687]: I1203 17:40:50.960635 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-s5nhl" podStartSLOduration=63.960610724 podStartE2EDuration="1m3.960610724s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:40:50.960010959 +0000 UTC m=+83.850706392" watchObservedRunningTime="2025-12-03 17:40:50.960610724 +0000 UTC m=+83.851306157" Dec 03 17:40:51 crc kubenswrapper[4687]: I1203 17:40:51.407357 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:51 crc kubenswrapper[4687]: I1203 17:40:51.407506 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:51 crc kubenswrapper[4687]: E1203 17:40:51.408023 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:51 crc kubenswrapper[4687]: E1203 17:40:51.408196 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:52 crc kubenswrapper[4687]: I1203 17:40:52.406985 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:52 crc kubenswrapper[4687]: I1203 17:40:52.407470 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:52 crc kubenswrapper[4687]: E1203 17:40:52.407758 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:52 crc kubenswrapper[4687]: E1203 17:40:52.407590 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:53 crc kubenswrapper[4687]: I1203 17:40:53.407039 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:53 crc kubenswrapper[4687]: E1203 17:40:53.408016 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:53 crc kubenswrapper[4687]: I1203 17:40:53.407262 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:53 crc kubenswrapper[4687]: E1203 17:40:53.408989 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:54 crc kubenswrapper[4687]: I1203 17:40:54.406814 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:54 crc kubenswrapper[4687]: I1203 17:40:54.406885 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:54 crc kubenswrapper[4687]: E1203 17:40:54.406924 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:54 crc kubenswrapper[4687]: E1203 17:40:54.407012 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:55 crc kubenswrapper[4687]: I1203 17:40:55.406787 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:55 crc kubenswrapper[4687]: I1203 17:40:55.406870 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:55 crc kubenswrapper[4687]: E1203 17:40:55.407013 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:55 crc kubenswrapper[4687]: E1203 17:40:55.407522 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:56 crc kubenswrapper[4687]: I1203 17:40:56.407157 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:56 crc kubenswrapper[4687]: I1203 17:40:56.407249 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:56 crc kubenswrapper[4687]: E1203 17:40:56.407308 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:56 crc kubenswrapper[4687]: E1203 17:40:56.407520 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:57 crc kubenswrapper[4687]: I1203 17:40:57.406759 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:57 crc kubenswrapper[4687]: I1203 17:40:57.406887 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:57 crc kubenswrapper[4687]: E1203 17:40:57.407896 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:57 crc kubenswrapper[4687]: E1203 17:40:57.408734 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:40:58 crc kubenswrapper[4687]: I1203 17:40:58.406629 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:40:58 crc kubenswrapper[4687]: E1203 17:40:58.406755 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:40:58 crc kubenswrapper[4687]: I1203 17:40:58.407041 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:40:58 crc kubenswrapper[4687]: E1203 17:40:58.407265 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:40:59 crc kubenswrapper[4687]: I1203 17:40:59.407223 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:40:59 crc kubenswrapper[4687]: I1203 17:40:59.407222 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:40:59 crc kubenswrapper[4687]: E1203 17:40:59.407412 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:40:59 crc kubenswrapper[4687]: E1203 17:40:59.407445 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:00 crc kubenswrapper[4687]: I1203 17:41:00.406252 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:00 crc kubenswrapper[4687]: I1203 17:41:00.406351 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:00 crc kubenswrapper[4687]: E1203 17:41:00.406464 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:00 crc kubenswrapper[4687]: E1203 17:41:00.406597 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:01 crc kubenswrapper[4687]: I1203 17:41:01.406922 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:01 crc kubenswrapper[4687]: I1203 17:41:01.407731 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:01 crc kubenswrapper[4687]: E1203 17:41:01.407884 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:01 crc kubenswrapper[4687]: E1203 17:41:01.408343 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:02 crc kubenswrapper[4687]: I1203 17:41:02.407311 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:02 crc kubenswrapper[4687]: I1203 17:41:02.407599 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:02 crc kubenswrapper[4687]: E1203 17:41:02.407814 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:02 crc kubenswrapper[4687]: E1203 17:41:02.408032 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:03 crc kubenswrapper[4687]: I1203 17:41:03.406933 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:03 crc kubenswrapper[4687]: E1203 17:41:03.407025 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:03 crc kubenswrapper[4687]: I1203 17:41:03.407145 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:03 crc kubenswrapper[4687]: E1203 17:41:03.407328 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:04 crc kubenswrapper[4687]: I1203 17:41:04.406324 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:04 crc kubenswrapper[4687]: I1203 17:41:04.406324 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:04 crc kubenswrapper[4687]: E1203 17:41:04.406508 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:04 crc kubenswrapper[4687]: E1203 17:41:04.407113 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:04 crc kubenswrapper[4687]: I1203 17:41:04.408863 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:41:04 crc kubenswrapper[4687]: E1203 17:41:04.409233 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" Dec 03 17:41:05 crc kubenswrapper[4687]: I1203 17:41:05.407718 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:05 crc kubenswrapper[4687]: E1203 17:41:05.407945 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:05 crc kubenswrapper[4687]: I1203 17:41:05.408075 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:05 crc kubenswrapper[4687]: E1203 17:41:05.408207 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:06 crc kubenswrapper[4687]: I1203 17:41:06.406569 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:06 crc kubenswrapper[4687]: I1203 17:41:06.406627 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:06 crc kubenswrapper[4687]: E1203 17:41:06.406715 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:06 crc kubenswrapper[4687]: E1203 17:41:06.406965 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:06 crc kubenswrapper[4687]: I1203 17:41:06.946679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:06 crc kubenswrapper[4687]: E1203 17:41:06.946835 4687 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:41:06 crc kubenswrapper[4687]: E1203 17:41:06.946903 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs podName:2c067216-97d2-43a1-a8a6-5719153b3c61 nodeName:}" failed. No retries permitted until 2025-12-03 17:42:10.946884544 +0000 UTC m=+163.837579997 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs") pod "network-metrics-daemon-w8876" (UID: "2c067216-97d2-43a1-a8a6-5719153b3c61") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 17:41:07 crc kubenswrapper[4687]: I1203 17:41:07.407078 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:07 crc kubenswrapper[4687]: I1203 17:41:07.407104 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:07 crc kubenswrapper[4687]: E1203 17:41:07.415048 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:07 crc kubenswrapper[4687]: E1203 17:41:07.415201 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:08 crc kubenswrapper[4687]: I1203 17:41:08.406531 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:08 crc kubenswrapper[4687]: I1203 17:41:08.406576 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:08 crc kubenswrapper[4687]: E1203 17:41:08.406861 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:08 crc kubenswrapper[4687]: E1203 17:41:08.406954 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:09 crc kubenswrapper[4687]: I1203 17:41:09.406386 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:09 crc kubenswrapper[4687]: I1203 17:41:09.406450 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:09 crc kubenswrapper[4687]: E1203 17:41:09.406539 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:09 crc kubenswrapper[4687]: E1203 17:41:09.406681 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:10 crc kubenswrapper[4687]: I1203 17:41:10.406839 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:10 crc kubenswrapper[4687]: I1203 17:41:10.406855 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:10 crc kubenswrapper[4687]: E1203 17:41:10.407146 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:10 crc kubenswrapper[4687]: E1203 17:41:10.407230 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:11 crc kubenswrapper[4687]: I1203 17:41:11.406563 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:11 crc kubenswrapper[4687]: I1203 17:41:11.406852 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:11 crc kubenswrapper[4687]: E1203 17:41:11.406925 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:11 crc kubenswrapper[4687]: E1203 17:41:11.407211 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:12 crc kubenswrapper[4687]: I1203 17:41:12.406533 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:12 crc kubenswrapper[4687]: I1203 17:41:12.406598 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:12 crc kubenswrapper[4687]: E1203 17:41:12.407246 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:12 crc kubenswrapper[4687]: E1203 17:41:12.407439 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:13 crc kubenswrapper[4687]: I1203 17:41:13.406642 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:13 crc kubenswrapper[4687]: I1203 17:41:13.406783 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:13 crc kubenswrapper[4687]: E1203 17:41:13.407837 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:13 crc kubenswrapper[4687]: E1203 17:41:13.407245 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:14 crc kubenswrapper[4687]: I1203 17:41:14.414170 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:14 crc kubenswrapper[4687]: I1203 17:41:14.414170 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:14 crc kubenswrapper[4687]: E1203 17:41:14.414987 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:14 crc kubenswrapper[4687]: E1203 17:41:14.414894 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:15 crc kubenswrapper[4687]: I1203 17:41:15.406935 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:15 crc kubenswrapper[4687]: E1203 17:41:15.407190 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:15 crc kubenswrapper[4687]: I1203 17:41:15.407664 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:15 crc kubenswrapper[4687]: E1203 17:41:15.408053 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:16 crc kubenswrapper[4687]: I1203 17:41:16.406304 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:16 crc kubenswrapper[4687]: I1203 17:41:16.407235 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:41:16 crc kubenswrapper[4687]: I1203 17:41:16.406383 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:16 crc kubenswrapper[4687]: E1203 17:41:16.407379 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-668q2_openshift-ovn-kubernetes(f7fe22da-1ea3-49ba-b2c6-851ff064db76)\"" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" Dec 03 17:41:16 crc kubenswrapper[4687]: E1203 17:41:16.407484 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:16 crc kubenswrapper[4687]: E1203 17:41:16.407658 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:17 crc kubenswrapper[4687]: I1203 17:41:17.406821 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:17 crc kubenswrapper[4687]: I1203 17:41:17.406760 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:17 crc kubenswrapper[4687]: E1203 17:41:17.408914 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:17 crc kubenswrapper[4687]: E1203 17:41:17.408997 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:18 crc kubenswrapper[4687]: I1203 17:41:18.407337 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:18 crc kubenswrapper[4687]: I1203 17:41:18.407483 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:18 crc kubenswrapper[4687]: E1203 17:41:18.407517 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:18 crc kubenswrapper[4687]: E1203 17:41:18.407792 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:19 crc kubenswrapper[4687]: I1203 17:41:19.406740 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:19 crc kubenswrapper[4687]: I1203 17:41:19.406835 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:19 crc kubenswrapper[4687]: E1203 17:41:19.406906 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:19 crc kubenswrapper[4687]: E1203 17:41:19.407151 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:20 crc kubenswrapper[4687]: I1203 17:41:20.407274 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:20 crc kubenswrapper[4687]: I1203 17:41:20.407331 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:20 crc kubenswrapper[4687]: E1203 17:41:20.407410 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:20 crc kubenswrapper[4687]: E1203 17:41:20.407504 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:21 crc kubenswrapper[4687]: I1203 17:41:21.407401 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:21 crc kubenswrapper[4687]: I1203 17:41:21.407420 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:21 crc kubenswrapper[4687]: E1203 17:41:21.407579 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:21 crc kubenswrapper[4687]: E1203 17:41:21.407670 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:22 crc kubenswrapper[4687]: I1203 17:41:22.406763 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:22 crc kubenswrapper[4687]: I1203 17:41:22.406831 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:22 crc kubenswrapper[4687]: E1203 17:41:22.406904 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:22 crc kubenswrapper[4687]: E1203 17:41:22.407026 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:23 crc kubenswrapper[4687]: I1203 17:41:23.407308 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:23 crc kubenswrapper[4687]: E1203 17:41:23.407443 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:23 crc kubenswrapper[4687]: I1203 17:41:23.407590 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:23 crc kubenswrapper[4687]: E1203 17:41:23.407704 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:24 crc kubenswrapper[4687]: I1203 17:41:24.037203 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbjvs_ede1a722-2df8-433e-b8be-82c434be7d02/kube-multus/1.log" Dec 03 17:41:24 crc kubenswrapper[4687]: I1203 17:41:24.037895 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbjvs_ede1a722-2df8-433e-b8be-82c434be7d02/kube-multus/0.log" Dec 03 17:41:24 crc kubenswrapper[4687]: I1203 17:41:24.037943 4687 generic.go:334] "Generic (PLEG): container finished" podID="ede1a722-2df8-433e-b8be-82c434be7d02" containerID="d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870" exitCode=1 Dec 03 17:41:24 crc kubenswrapper[4687]: I1203 17:41:24.037969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbjvs" event={"ID":"ede1a722-2df8-433e-b8be-82c434be7d02","Type":"ContainerDied","Data":"d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870"} Dec 03 17:41:24 crc kubenswrapper[4687]: I1203 17:41:24.038001 4687 scope.go:117] "RemoveContainer" containerID="261da6f070abf68408fb77a76f7dd9763adb94ef45c3f0363a72a93c71771123" Dec 03 17:41:24 crc kubenswrapper[4687]: I1203 17:41:24.038358 4687 scope.go:117] "RemoveContainer" containerID="d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870" Dec 03 17:41:24 crc kubenswrapper[4687]: E1203 17:41:24.038497 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-kbjvs_openshift-multus(ede1a722-2df8-433e-b8be-82c434be7d02)\"" pod="openshift-multus/multus-kbjvs" podUID="ede1a722-2df8-433e-b8be-82c434be7d02" Dec 03 17:41:24 crc kubenswrapper[4687]: I1203 17:41:24.406560 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:24 crc kubenswrapper[4687]: I1203 17:41:24.406671 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:24 crc kubenswrapper[4687]: E1203 17:41:24.406688 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:24 crc kubenswrapper[4687]: E1203 17:41:24.406843 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:25 crc kubenswrapper[4687]: I1203 17:41:25.042177 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbjvs_ede1a722-2df8-433e-b8be-82c434be7d02/kube-multus/1.log" Dec 03 17:41:25 crc kubenswrapper[4687]: I1203 17:41:25.407019 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:25 crc kubenswrapper[4687]: I1203 17:41:25.407066 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:25 crc kubenswrapper[4687]: E1203 17:41:25.407204 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:25 crc kubenswrapper[4687]: E1203 17:41:25.407293 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:26 crc kubenswrapper[4687]: I1203 17:41:26.406553 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:26 crc kubenswrapper[4687]: I1203 17:41:26.406659 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:26 crc kubenswrapper[4687]: E1203 17:41:26.406729 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:26 crc kubenswrapper[4687]: E1203 17:41:26.406867 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:27 crc kubenswrapper[4687]: I1203 17:41:27.406610 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:27 crc kubenswrapper[4687]: I1203 17:41:27.406803 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:27 crc kubenswrapper[4687]: E1203 17:41:27.408418 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:27 crc kubenswrapper[4687]: E1203 17:41:27.408538 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:27 crc kubenswrapper[4687]: E1203 17:41:27.431918 4687 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 03 17:41:27 crc kubenswrapper[4687]: E1203 17:41:27.504793 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 17:41:28 crc kubenswrapper[4687]: I1203 17:41:28.406563 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:28 crc kubenswrapper[4687]: I1203 17:41:28.406605 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:28 crc kubenswrapper[4687]: E1203 17:41:28.406698 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:28 crc kubenswrapper[4687]: E1203 17:41:28.406833 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:29 crc kubenswrapper[4687]: I1203 17:41:29.406795 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:29 crc kubenswrapper[4687]: E1203 17:41:29.406928 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:29 crc kubenswrapper[4687]: I1203 17:41:29.408418 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:29 crc kubenswrapper[4687]: E1203 17:41:29.408751 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:30 crc kubenswrapper[4687]: I1203 17:41:30.406763 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:30 crc kubenswrapper[4687]: E1203 17:41:30.406890 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:30 crc kubenswrapper[4687]: I1203 17:41:30.406784 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:30 crc kubenswrapper[4687]: E1203 17:41:30.407200 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:30 crc kubenswrapper[4687]: I1203 17:41:30.408557 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:41:31 crc kubenswrapper[4687]: I1203 17:41:31.062394 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/3.log" Dec 03 17:41:31 crc kubenswrapper[4687]: I1203 17:41:31.064751 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerStarted","Data":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} Dec 03 17:41:31 crc kubenswrapper[4687]: I1203 17:41:31.065222 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:41:31 crc kubenswrapper[4687]: I1203 17:41:31.093768 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podStartSLOduration=103.093751228 podStartE2EDuration="1m43.093751228s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:31.092798583 +0000 UTC m=+123.983494026" watchObservedRunningTime="2025-12-03 17:41:31.093751228 +0000 UTC m=+123.984446661" Dec 03 17:41:31 crc kubenswrapper[4687]: I1203 17:41:31.317456 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w8876"] Dec 03 17:41:31 crc kubenswrapper[4687]: I1203 17:41:31.317579 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:31 crc kubenswrapper[4687]: E1203 17:41:31.317683 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:31 crc kubenswrapper[4687]: I1203 17:41:31.406920 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:31 crc kubenswrapper[4687]: I1203 17:41:31.407117 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:31 crc kubenswrapper[4687]: E1203 17:41:31.407525 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:31 crc kubenswrapper[4687]: E1203 17:41:31.407599 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:32 crc kubenswrapper[4687]: I1203 17:41:32.407197 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:32 crc kubenswrapper[4687]: E1203 17:41:32.407443 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:32 crc kubenswrapper[4687]: E1203 17:41:32.505693 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 17:41:33 crc kubenswrapper[4687]: I1203 17:41:33.406666 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:33 crc kubenswrapper[4687]: I1203 17:41:33.406727 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:33 crc kubenswrapper[4687]: I1203 17:41:33.406758 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:33 crc kubenswrapper[4687]: E1203 17:41:33.406949 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:33 crc kubenswrapper[4687]: E1203 17:41:33.407070 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:33 crc kubenswrapper[4687]: E1203 17:41:33.407285 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:34 crc kubenswrapper[4687]: I1203 17:41:34.406898 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:34 crc kubenswrapper[4687]: E1203 17:41:34.407053 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:35 crc kubenswrapper[4687]: I1203 17:41:35.406236 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:35 crc kubenswrapper[4687]: E1203 17:41:35.406364 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:35 crc kubenswrapper[4687]: I1203 17:41:35.406530 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:35 crc kubenswrapper[4687]: E1203 17:41:35.406581 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:35 crc kubenswrapper[4687]: I1203 17:41:35.406683 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:35 crc kubenswrapper[4687]: E1203 17:41:35.406725 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:36 crc kubenswrapper[4687]: I1203 17:41:36.406723 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:36 crc kubenswrapper[4687]: E1203 17:41:36.406891 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:37 crc kubenswrapper[4687]: I1203 17:41:37.406622 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:37 crc kubenswrapper[4687]: I1203 17:41:37.406657 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:37 crc kubenswrapper[4687]: I1203 17:41:37.407910 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:37 crc kubenswrapper[4687]: E1203 17:41:37.407903 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:37 crc kubenswrapper[4687]: E1203 17:41:37.408239 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:37 crc kubenswrapper[4687]: E1203 17:41:37.408339 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:37 crc kubenswrapper[4687]: I1203 17:41:37.408485 4687 scope.go:117] "RemoveContainer" containerID="d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870" Dec 03 17:41:37 crc kubenswrapper[4687]: E1203 17:41:37.506398 4687 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Dec 03 17:41:38 crc kubenswrapper[4687]: I1203 17:41:38.091083 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbjvs_ede1a722-2df8-433e-b8be-82c434be7d02/kube-multus/1.log" Dec 03 17:41:38 crc kubenswrapper[4687]: I1203 17:41:38.091171 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbjvs" event={"ID":"ede1a722-2df8-433e-b8be-82c434be7d02","Type":"ContainerStarted","Data":"c8b065c74150d6815ce9b20a20e9ba6c3845bb6ae5f88984b267fd3ee16190d9"} Dec 03 17:41:38 crc kubenswrapper[4687]: I1203 17:41:38.406515 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:38 crc kubenswrapper[4687]: E1203 17:41:38.406694 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:39 crc kubenswrapper[4687]: I1203 17:41:39.406896 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:39 crc kubenswrapper[4687]: I1203 17:41:39.406936 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:39 crc kubenswrapper[4687]: I1203 17:41:39.407157 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:39 crc kubenswrapper[4687]: E1203 17:41:39.407187 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:39 crc kubenswrapper[4687]: E1203 17:41:39.407264 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:39 crc kubenswrapper[4687]: E1203 17:41:39.407360 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:40 crc kubenswrapper[4687]: I1203 17:41:40.406523 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:40 crc kubenswrapper[4687]: E1203 17:41:40.406685 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:41 crc kubenswrapper[4687]: I1203 17:41:41.406634 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:41 crc kubenswrapper[4687]: I1203 17:41:41.406717 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:41 crc kubenswrapper[4687]: E1203 17:41:41.406807 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 03 17:41:41 crc kubenswrapper[4687]: I1203 17:41:41.406878 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:41 crc kubenswrapper[4687]: E1203 17:41:41.406993 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-w8876" podUID="2c067216-97d2-43a1-a8a6-5719153b3c61" Dec 03 17:41:41 crc kubenswrapper[4687]: E1203 17:41:41.407076 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 03 17:41:42 crc kubenswrapper[4687]: I1203 17:41:42.406552 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:42 crc kubenswrapper[4687]: E1203 17:41:42.406739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 03 17:41:43 crc kubenswrapper[4687]: I1203 17:41:43.407368 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:41:43 crc kubenswrapper[4687]: I1203 17:41:43.407678 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:43 crc kubenswrapper[4687]: I1203 17:41:43.408168 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:43 crc kubenswrapper[4687]: I1203 17:41:43.410656 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 17:41:43 crc kubenswrapper[4687]: I1203 17:41:43.410983 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 17:41:43 crc kubenswrapper[4687]: I1203 17:41:43.411087 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 17:41:43 crc kubenswrapper[4687]: I1203 17:41:43.411222 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 17:41:43 crc kubenswrapper[4687]: I1203 17:41:43.412012 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 17:41:43 crc kubenswrapper[4687]: I1203 17:41:43.414796 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 17:41:44 crc kubenswrapper[4687]: I1203 17:41:44.406319 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:47 crc kubenswrapper[4687]: I1203 17:41:47.288103 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.678903 4687 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.732511 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.733790 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q8fqs"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.734006 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.735242 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r92g"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.735738 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.736634 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.736808 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.737253 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.737355 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.737615 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.737840 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.739337 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hp9ll"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.739857 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.743262 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.743413 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.743709 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.745788 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.745878 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.745800 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.746244 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.746411 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.746929 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.749007 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.749256 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.757845 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.757851 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.758172 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.758847 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.758934 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.761391 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.761543 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.763814 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.764017 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.764223 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.764495 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.766080 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nv4n7"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.766399 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.766468 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.766519 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.767726 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.767758 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.767783 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.767923 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.767968 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.768003 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.768220 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.768286 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.768357 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.768494 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.768657 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.768831 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.768958 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.775317 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.776224 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.790378 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.812275 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ttkxf"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.812800 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ttkxf"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.813850 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.814157 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.814389 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.814445 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.815007 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gd77z"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.815641 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gd77z"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.816344 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-mkvps"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.816680 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mkvps"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.817053 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-zrxg4"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.817786 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zrxg4"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.818229 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.818517 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.818858 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.818949 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.819163 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.819360 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.819438 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.819404 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.822822 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.823800 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.823800 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.826270 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gg6bm"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.827035 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.827941 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.828047 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.828971 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.829181 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.829314 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.829425 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.829527 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.829592 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830406 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.829627 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830928 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.831164 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.831258 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.829896 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.829993 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.829999 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830068 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830152 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830208 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830261 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830298 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830334 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830374 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.831751 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830418 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.832087 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.832162 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.831165 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.832090 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830546 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830584 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830621 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830648 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830690 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830722 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.830844 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.835924 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.837763 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.838313 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.838727 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.839027 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.843164 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.843577 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.844009 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.844879 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.845570 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.859355 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.863561 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr4m9\" (UniqueName: \"kubernetes.io/projected/a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b-kube-api-access-zr4m9\") pod \"openshift-apiserver-operator-796bbdcf4f-qxrl7\" (UID: \"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.863661 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-policies\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.863697 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm7lv\" (UniqueName: \"kubernetes.io/projected/5df036a4-ff70-4a7c-8575-cb8c605cef1b-kube-api-access-gm7lv\") pod \"cluster-samples-operator-665b6dd947-qxn9b\" (UID: \"5df036a4-ff70-4a7c-8575-cb8c605cef1b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.863804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2spp6\" (UniqueName: \"kubernetes.io/projected/4899f97c-1e4f-4359-a5d4-427f5bd650a4-kube-api-access-2spp6\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.863877 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/646228e4-463e-4aed-a466-afb944163282-serving-cert\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864045 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93874fcd-039f-4572-9f35-24c20dfd93ce-config\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864084 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qxrl7\" (UID: \"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864153 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864192 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864239 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864278 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4899f97c-1e4f-4359-a5d4-427f5bd650a4-serving-cert\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864413 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/14170176-819b-413a-ae4b-8b62d7b606ba-node-pullsecrets\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864495 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5ac8a5c-1fe7-426d-a2f3-819000c75add-audit-policies\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864620 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b2ecfc-7839-4364-9e65-988bb4f666f5-serving-cert\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.864733 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-dir\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.865141 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14170176-819b-413a-ae4b-8b62d7b606ba-serving-cert\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.865226 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5ac8a5c-1fe7-426d-a2f3-819000c75add-encryption-config\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.865407 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5ac8a5c-1fe7-426d-a2f3-819000c75add-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.865560 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.865586 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.865767 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ac8a5c-1fe7-426d-a2f3-819000c75add-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.865821 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40104d97-9e24-4792-927a-8861f63d1df0-auth-proxy-config\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.866055 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.867664 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.867951 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.874336 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq"]
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.874513 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.877881 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.877970 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5df036a4-ff70-4a7c-8575-cb8c605cef1b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qxn9b\" (UID: \"5df036a4-ff70-4a7c-8575-cb8c605cef1b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.878012 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14170176-819b-413a-ae4b-8b62d7b606ba-encryption-config\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.878046 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kx5\" (UniqueName: \"kubernetes.io/projected/646228e4-463e-4aed-a466-afb944163282-kube-api-access-n7kx5\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.878075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4899f97c-1e4f-4359-a5d4-427f5bd650a4-config\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.878162 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-config\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.878192 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93874fcd-039f-4572-9f35-24c20dfd93ce-serving-cert\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.878219 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.880161 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.880214 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/40104d97-9e24-4792-927a-8861f63d1df0-machine-approver-tls\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.880241 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4899f97c-1e4f-4359-a5d4-427f5bd650a4-trusted-ca\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.880274 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93874fcd-039f-4572-9f35-24c20dfd93ce-service-ca-bundle\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.880298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhqkn\" (UniqueName: \"kubernetes.io/projected/11f7e8b6-ef2e-48ca-b841-f3df95c775be-kube-api-access-zhqkn\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882581 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5ac8a5c-1fe7-426d-a2f3-819000c75add-audit-dir\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882641 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882706 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40104d97-9e24-4792-927a-8861f63d1df0-config\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882727 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc5p4\" (UniqueName: \"kubernetes.io/projected/40104d97-9e24-4792-927a-8861f63d1df0-kube-api-access-nc5p4\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882750 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcx2q\" (UniqueName: \"kubernetes.io/projected/93874fcd-039f-4572-9f35-24c20dfd93ce-kube-api-access-pcx2q\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882772 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-config\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"
Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882797 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-config\") pod
\"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882823 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93874fcd-039f-4572-9f35-24c20dfd93ce-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882858 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ac8a5c-1fe7-426d-a2f3-819000c75add-serving-cert\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882882 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q82d\" (UniqueName: \"kubernetes.io/projected/d5ac8a5c-1fe7-426d-a2f3-819000c75add-kube-api-access-2q82d\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882902 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dprss\" (UniqueName: \"kubernetes.io/projected/f2b2ecfc-7839-4364-9e65-988bb4f666f5-kube-api-access-dprss\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882921 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882939 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-audit\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882959 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4fgf\" (UniqueName: \"kubernetes.io/projected/14170176-819b-413a-ae4b-8b62d7b606ba-kube-api-access-c4fgf\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.882986 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-client-ca\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-client-ca\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883032 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-etcd-serving-ca\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883057 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-image-import-ca\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883083 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-config\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883103 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-images\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883145 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/d5ac8a5c-1fe7-426d-a2f3-819000c75add-etcd-client\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883165 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rclm7\" (UniqueName: \"kubernetes.io/projected/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-kube-api-access-rclm7\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883190 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883213 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883260 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qxrl7\" (UID: \"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883281 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/14170176-819b-413a-ae4b-8b62d7b606ba-etcd-client\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.883305 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14170176-819b-413a-ae4b-8b62d7b606ba-audit-dir\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.884031 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lg5jg"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.884721 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.885589 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hp9ll"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.889909 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q8fqs"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.889971 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.890016 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.890352 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.890716 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.891094 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.891830 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.893868 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.896051 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.896251 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.899516 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.900398 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.901547 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.902270 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.905419 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.905911 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.906683 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.917192 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.921484 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v4cqf"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.922894 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.926078 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.926751 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.930279 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-4bjp6"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.930932 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.931308 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.931340 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.931720 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.931774 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.933073 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.933733 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.934715 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.935061 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.937204 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.937346 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.938235 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-774pl"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.938694 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.938710 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.939284 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.940007 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.940592 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w7v42"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.941905 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.942093 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.942328 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.943304 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.944491 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.961993 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.963916 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.979052 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.981211 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gd77z"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987723 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93874fcd-039f-4572-9f35-24c20dfd93ce-serving-cert\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987764 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987789 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-config\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/40104d97-9e24-4792-927a-8861f63d1df0-machine-approver-tls\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987874 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4899f97c-1e4f-4359-a5d4-427f5bd650a4-trusted-ca\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987897 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93874fcd-039f-4572-9f35-24c20dfd93ce-service-ca-bundle\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987925 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhqkn\" (UniqueName: \"kubernetes.io/projected/11f7e8b6-ef2e-48ca-b841-f3df95c775be-kube-api-access-zhqkn\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 
17:41:51.987948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5ac8a5c-1fe7-426d-a2f3-819000c75add-audit-dir\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987973 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.987998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40104d97-9e24-4792-927a-8861f63d1df0-config\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988045 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc5p4\" (UniqueName: \"kubernetes.io/projected/40104d97-9e24-4792-927a-8861f63d1df0-kube-api-access-nc5p4\") pod \"machine-approver-56656f9798-5f7jg\" (UID: 
\"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988076 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-config\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988097 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcx2q\" (UniqueName: \"kubernetes.io/projected/93874fcd-039f-4572-9f35-24c20dfd93ce-kube-api-access-pcx2q\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988135 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-config\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988161 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93874fcd-039f-4572-9f35-24c20dfd93ce-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d5ac8a5c-1fe7-426d-a2f3-819000c75add-serving-cert\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988209 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q82d\" (UniqueName: \"kubernetes.io/projected/d5ac8a5c-1fe7-426d-a2f3-819000c75add-kube-api-access-2q82d\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988231 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dprss\" (UniqueName: \"kubernetes.io/projected/f2b2ecfc-7839-4364-9e65-988bb4f666f5-kube-api-access-dprss\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988262 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988283 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-audit\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988306 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c4fgf\" (UniqueName: \"kubernetes.io/projected/14170176-819b-413a-ae4b-8b62d7b606ba-kube-api-access-c4fgf\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988328 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-client-ca\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988351 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-client-ca\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988373 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-etcd-serving-ca\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988395 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-image-import-ca\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 
17:41:51.988421 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-config\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988445 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-images\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988467 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5ac8a5c-1fe7-426d-a2f3-819000c75add-etcd-client\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988511 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rclm7\" (UniqueName: \"kubernetes.io/projected/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-kube-api-access-rclm7\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988542 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988567 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crlpc\" (UniqueName: \"kubernetes.io/projected/2fa7fe3b-4230-4cbe-a1f5-461458f1d95d-kube-api-access-crlpc\") pod \"downloads-7954f5f757-zrxg4\" (UID: \"2fa7fe3b-4230-4cbe-a1f5-461458f1d95d\") " pod="openshift-console/downloads-7954f5f757-zrxg4" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988632 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qxrl7\" (UID: \"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988652 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/14170176-819b-413a-ae4b-8b62d7b606ba-etcd-client\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988674 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14170176-819b-413a-ae4b-8b62d7b606ba-audit-dir\") pod 
\"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988726 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-policies\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988756 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm7lv\" (UniqueName: \"kubernetes.io/projected/5df036a4-ff70-4a7c-8575-cb8c605cef1b-kube-api-access-gm7lv\") pod \"cluster-samples-operator-665b6dd947-qxn9b\" (UID: \"5df036a4-ff70-4a7c-8575-cb8c605cef1b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988965 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2spp6\" (UniqueName: \"kubernetes.io/projected/4899f97c-1e4f-4359-a5d4-427f5bd650a4-kube-api-access-2spp6\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.988986 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr4m9\" (UniqueName: \"kubernetes.io/projected/a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b-kube-api-access-zr4m9\") pod \"openshift-apiserver-operator-796bbdcf4f-qxrl7\" (UID: \"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989008 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/646228e4-463e-4aed-a466-afb944163282-serving-cert\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989030 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93874fcd-039f-4572-9f35-24c20dfd93ce-config\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989066 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qxrl7\" (UID: \"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989088 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989112 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: 
\"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989150 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989175 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4899f97c-1e4f-4359-a5d4-427f5bd650a4-serving-cert\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/14170176-819b-413a-ae4b-8b62d7b606ba-node-pullsecrets\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989220 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5ac8a5c-1fe7-426d-a2f3-819000c75add-audit-policies\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989241 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f2b2ecfc-7839-4364-9e65-988bb4f666f5-serving-cert\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989262 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-dir\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989284 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14170176-819b-413a-ae4b-8b62d7b606ba-serving-cert\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989305 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5ac8a5c-1fe7-426d-a2f3-819000c75add-encryption-config\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989337 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5ac8a5c-1fe7-426d-a2f3-819000c75add-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989358 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989407 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40104d97-9e24-4792-927a-8861f63d1df0-auth-proxy-config\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989428 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989449 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ac8a5c-1fe7-426d-a2f3-819000c75add-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc 
kubenswrapper[4687]: I1203 17:41:51.989480 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989502 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5df036a4-ff70-4a7c-8575-cb8c605cef1b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qxn9b\" (UID: \"5df036a4-ff70-4a7c-8575-cb8c605cef1b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989523 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14170176-819b-413a-ae4b-8b62d7b606ba-encryption-config\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989544 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7kx5\" (UniqueName: \"kubernetes.io/projected/646228e4-463e-4aed-a466-afb944163282-kube-api-access-n7kx5\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.989566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4899f97c-1e4f-4359-a5d4-427f5bd650a4-config\") pod 
\"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.990404 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.990478 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4899f97c-1e4f-4359-a5d4-427f5bd650a4-config\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.991311 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj"] Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.992575 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.992634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-config\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.992802 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5ac8a5c-1fe7-426d-a2f3-819000c75add-audit-dir\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.992840 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.995002 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.997474 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/14170176-819b-413a-ae4b-8b62d7b606ba-node-pullsecrets\") pod \"apiserver-76f77b778f-gd77z\" (UID: 
\"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.998422 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qxrl7\" (UID: \"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" Dec 03 17:41:51 crc kubenswrapper[4687]: I1203 17:41:51.999341 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-client-ca\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.001011 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93874fcd-039f-4572-9f35-24c20dfd93ce-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.001075 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-config\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.001721 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-config\") pod 
\"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.002009 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-client-ca\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.002580 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-etcd-serving-ca\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.003192 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.003267 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-images\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.003449 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-image-import-ca\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.003894 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d5ac8a5c-1fe7-426d-a2f3-819000c75add-audit-policies\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.008724 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40104d97-9e24-4792-927a-8861f63d1df0-config\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.008736 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4899f97c-1e4f-4359-a5d4-427f5bd650a4-trusted-ca\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.009212 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14170176-819b-413a-ae4b-8b62d7b606ba-audit-dir\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.009964 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-policies\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.010218 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-dir\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.010722 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d5ac8a5c-1fe7-426d-a2f3-819000c75add-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.010876 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-config\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.011328 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/14170176-819b-413a-ae4b-8b62d7b606ba-audit\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.012680 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/93874fcd-039f-4572-9f35-24c20dfd93ce-serving-cert\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.013534 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b2ecfc-7839-4364-9e65-988bb4f666f5-serving-cert\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.013725 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.013999 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/40104d97-9e24-4792-927a-8861f63d1df0-auth-proxy-config\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.014144 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.014113 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ttkxf"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.014236 4687 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.014255 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nv4n7"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.014681 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.015108 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d5ac8a5c-1fe7-426d-a2f3-819000c75add-etcd-client\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.015270 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.015582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93874fcd-039f-4572-9f35-24c20dfd93ce-service-ca-bundle\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:52 crc 
kubenswrapper[4687]: I1203 17:41:52.015770 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.015942 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93874fcd-039f-4572-9f35-24c20dfd93ce-config\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.016092 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4899f97c-1e4f-4359-a5d4-427f5bd650a4-serving-cert\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.016869 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.017262 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ac8a5c-1fe7-426d-a2f3-819000c75add-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.017627 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-4r92g"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.018394 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.018343 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.018651 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d5ac8a5c-1fe7-426d-a2f3-819000c75add-encryption-config\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.019299 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14170176-819b-413a-ae4b-8b62d7b606ba-serving-cert\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.019355 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.019979 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.021339 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.021381 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-s5qm9"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.021757 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5df036a4-ff70-4a7c-8575-cb8c605cef1b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qxn9b\" (UID: \"5df036a4-ff70-4a7c-8575-cb8c605cef1b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.022098 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.023045 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pc4n2"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.023629 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.025043 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.026575 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.026774 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5ac8a5c-1fe7-426d-a2f3-819000c75add-serving-cert\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.027892 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.028046 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/40104d97-9e24-4792-927a-8861f63d1df0-machine-approver-tls\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.028211 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.028106 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qxrl7\" (UID: \"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.028360 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.028787 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/646228e4-463e-4aed-a466-afb944163282-serving-cert\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.028941 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/14170176-819b-413a-ae4b-8b62d7b606ba-etcd-client\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.029827 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.030106 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/14170176-819b-413a-ae4b-8b62d7b606ba-encryption-config\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.030557 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.031583 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.032582 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.033586 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mkvps"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.034999 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.035584 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-phlmz"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.036516 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.036630 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.036753 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lg5jg"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.037770 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.038809 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.039932 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.041010 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pc4n2"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.042109 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.043191 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zrxg4"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.045353 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gg6bm"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.048095 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kv9nd"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.049888 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.050108 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v4cqf"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.051745 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.053435 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.055588 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.057151 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.057311 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.058354 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kv9nd"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.059551 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.060887 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-774pl"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.061986 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"] Dec 03 17:41:52 crc 
kubenswrapper[4687]: I1203 17:41:52.063147 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.064171 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-s5qm9"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.065961 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w7v42"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.067036 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2wpcl"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.068106 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2wpcl" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.068664 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2wpcl"] Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.076646 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.094497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crlpc\" (UniqueName: \"kubernetes.io/projected/2fa7fe3b-4230-4cbe-a1f5-461458f1d95d-kube-api-access-crlpc\") pod \"downloads-7954f5f757-zrxg4\" (UID: \"2fa7fe3b-4230-4cbe-a1f5-461458f1d95d\") " pod="openshift-console/downloads-7954f5f757-zrxg4" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.098255 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.116108 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.135597 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.156434 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.176326 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.196581 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.217087 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.237104 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.256072 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.276429 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.296858 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.316489 4687 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.338183 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.357874 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.385049 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.396663 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.416935 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.437747 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.456907 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.476878 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.517382 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.537555 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 
03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.556865 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.578048 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.597897 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.616861 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.637181 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.656736 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.676952 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.697920 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.717473 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.737386 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.756287 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.777523 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.797984 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.817309 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.836803 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.857685 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.877025 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.897203 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.916691 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.935476 4687 request.go:700] Waited for 1.001418613s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.938634 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.957569 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.977185 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 17:41:52 crc kubenswrapper[4687]: I1203 17:41:52.996986 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.016904 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.037175 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.057755 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.077993 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.097457 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 17:41:53 crc 
kubenswrapper[4687]: I1203 17:41:53.117523 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.137866 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.157740 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.177528 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.205858 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.221438 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.238534 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.256781 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.277661 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.296849 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.316490 4687 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.336780 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.356017 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.377595 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.397296 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.411393 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.411612 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:53 crc kubenswrapper[4687]: E1203 17:41:53.411797 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 17:43:55.411736923 +0000 UTC m=+268.302432466 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.411919 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.416894 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.436919 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.457266 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.497224 4687 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.498857 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhqkn\" (UniqueName: \"kubernetes.io/projected/11f7e8b6-ef2e-48ca-b841-f3df95c775be-kube-api-access-zhqkn\") pod \"oauth-openshift-558db77b4-nv4n7\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.513112 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.513212 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.519263 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.521502 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.534921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc5p4\" (UniqueName: \"kubernetes.io/projected/40104d97-9e24-4792-927a-8861f63d1df0-kube-api-access-nc5p4\") pod \"machine-approver-56656f9798-5f7jg\" (UID: \"40104d97-9e24-4792-927a-8861f63d1df0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.552145 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcx2q\" (UniqueName: \"kubernetes.io/projected/93874fcd-039f-4572-9f35-24c20dfd93ce-kube-api-access-pcx2q\") pod \"authentication-operator-69f744f599-hp9ll\" (UID: \"93874fcd-039f-4572-9f35-24c20dfd93ce\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.568282 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.581362 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dprss\" (UniqueName: \"kubernetes.io/projected/f2b2ecfc-7839-4364-9e65-988bb4f666f5-kube-api-access-dprss\") pod \"controller-manager-879f6c89f-4r92g\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:53 crc kubenswrapper[4687]: W1203 17:41:53.585978 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40104d97_9e24_4792_927a_8861f63d1df0.slice/crio-2dd8f0abd8c1d65d48f0e406afad3872b58d1d529016a17c1e39d180b3c83aa6 WatchSource:0}: Error finding container 2dd8f0abd8c1d65d48f0e406afad3872b58d1d529016a17c1e39d180b3c83aa6: Status 404 returned error can't find the container with id 2dd8f0abd8c1d65d48f0e406afad3872b58d1d529016a17c1e39d180b3c83aa6 Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.624826 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2spp6\" (UniqueName: \"kubernetes.io/projected/4899f97c-1e4f-4359-a5d4-427f5bd650a4-kube-api-access-2spp6\") pod \"console-operator-58897d9998-ttkxf\" (UID: \"4899f97c-1e4f-4359-a5d4-427f5bd650a4\") " pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.625322 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.635811 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.644029 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr4m9\" (UniqueName: \"kubernetes.io/projected/a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b-kube-api-access-zr4m9\") pod \"openshift-apiserver-operator-796bbdcf4f-qxrl7\" (UID: \"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.664010 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q82d\" (UniqueName: \"kubernetes.io/projected/d5ac8a5c-1fe7-426d-a2f3-819000c75add-kube-api-access-2q82d\") pod \"apiserver-7bbb656c7d-h58rq\" (UID: \"d5ac8a5c-1fe7-426d-a2f3-819000c75add\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.681300 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4fgf\" (UniqueName: \"kubernetes.io/projected/14170176-819b-413a-ae4b-8b62d7b606ba-kube-api-access-c4fgf\") pod \"apiserver-76f77b778f-gd77z\" (UID: \"14170176-819b-413a-ae4b-8b62d7b606ba\") " pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.684130 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.700078 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7kx5\" (UniqueName: \"kubernetes.io/projected/646228e4-463e-4aed-a466-afb944163282-kube-api-access-n7kx5\") pod \"route-controller-manager-6576b87f9c-5qrlx\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.702017 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.709425 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.712925 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rclm7\" (UniqueName: \"kubernetes.io/projected/bcfb21f2-e1fe-42f0-b166-a2f50847cc6b-kube-api-access-rclm7\") pod \"machine-api-operator-5694c8668f-q8fqs\" (UID: \"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.717637 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.723298 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.728234 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.735245 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.737388 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.756543 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.776354 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.796565 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.816477 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.836069 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.854580 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.856668 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.877486 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.897083 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.915889 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.917253 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.938269 4687 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.944998 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.954686 4687 request.go:700] Waited for 1.904423739s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.957080 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 17:41:53 crc kubenswrapper[4687]: I1203 17:41:53.977311 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:53.997047 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.017254 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.036834 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.057467 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.098609 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crlpc\" (UniqueName: \"kubernetes.io/projected/2fa7fe3b-4230-4cbe-a1f5-461458f1d95d-kube-api-access-crlpc\") pod \"downloads-7954f5f757-zrxg4\" (UID: \"2fa7fe3b-4230-4cbe-a1f5-461458f1d95d\") " pod="openshift-console/downloads-7954f5f757-zrxg4" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.152846 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" event={"ID":"40104d97-9e24-4792-927a-8861f63d1df0","Type":"ContainerStarted","Data":"2dd8f0abd8c1d65d48f0e406afad3872b58d1d529016a17c1e39d180b3c83aa6"} Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.521651 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.522111 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.522225 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.522385 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-tls\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 
17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.522513 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-certificates\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: E1203 17:41:54.522694 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.022670071 +0000 UTC m=+147.913365544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.523074 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-zrxg4" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.523068 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.528335 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm7lv\" (UniqueName: \"kubernetes.io/projected/5df036a4-ff70-4a7c-8575-cb8c605cef1b-kube-api-access-gm7lv\") pod \"cluster-samples-operator-665b6dd947-qxn9b\" (UID: \"5df036a4-ff70-4a7c-8575-cb8c605cef1b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.625152 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:54 crc kubenswrapper[4687]: E1203 17:41:54.625467 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.125419263 +0000 UTC m=+148.016114716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.625543 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-service-ca\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.625580 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-oauth-serving-cert\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.625616 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-config\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.625631 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-trusted-ca-bundle\") pod \"console-f9d7485db-mkvps\" (UID: 
\"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627172 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccc53c5b-df64-41ca-bee7-9497d7082fec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xpt4f\" (UID: \"ccc53c5b-df64-41ca-bee7-9497d7082fec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627239 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7e85c769-bd22-49ed-b5b4-8bfd40d7027a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-s7bxb\" (UID: \"7e85c769-bd22-49ed-b5b4-8bfd40d7027a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-bound-sa-token\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627351 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: 
I1203 17:41:54.627372 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-etcd-client\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627429 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-trusted-ca\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627446 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gp92\" (UniqueName: \"kubernetes.io/projected/7e85c769-bd22-49ed-b5b4-8bfd40d7027a-kube-api-access-6gp92\") pod \"openshift-config-operator-7777fb866f-s7bxb\" (UID: \"7e85c769-bd22-49ed-b5b4-8bfd40d7027a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627492 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122d3933-8a23-4268-b5e6-9908f55537c0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rkbht\" (UID: \"122d3933-8a23-4268-b5e6-9908f55537c0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627509 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbqc7\" (UniqueName: 
\"kubernetes.io/projected/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-kube-api-access-nbqc7\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627545 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvspw\" (UniqueName: \"kubernetes.io/projected/1c55e5e2-5437-468e-9410-605afa2612d9-kube-api-access-rvspw\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627562 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55068ff7-230e-4368-aa62-4b4262d614ce-metrics-tls\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627578 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627593 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a827c9a9-8ab5-4135-b82d-032a234d0ab0-config\") pod \"kube-apiserver-operator-766d6c64bb-hwwgb\" (UID: \"a827c9a9-8ab5-4135-b82d-032a234d0ab0\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627610 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-oauth-config\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627629 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssxz4\" (UniqueName: \"kubernetes.io/projected/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-kube-api-access-ssxz4\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627643 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a827c9a9-8ab5-4135-b82d-032a234d0ab0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hwwgb\" (UID: \"a827c9a9-8ab5-4135-b82d-032a234d0ab0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627681 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gv5n\" (UniqueName: \"kubernetes.io/projected/122d3933-8a23-4268-b5e6-9908f55537c0-kube-api-access-5gv5n\") pod \"openshift-controller-manager-operator-756b6f6bc6-rkbht\" (UID: \"122d3933-8a23-4268-b5e6-9908f55537c0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627699 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a827c9a9-8ab5-4135-b82d-032a234d0ab0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hwwgb\" (UID: \"a827c9a9-8ab5-4135-b82d-032a234d0ab0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627729 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627745 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627760 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-etcd-ca\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627785 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-console-config\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " 
pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627811 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/122d3933-8a23-4268-b5e6-9908f55537c0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rkbht\" (UID: \"122d3933-8a23-4268-b5e6-9908f55537c0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627838 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627854 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-serving-cert\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627870 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c1f9ec-786b-4f7b-9244-cb29ea924da9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj48t\" (UID: \"07c1f9ec-786b-4f7b-9244-cb29ea924da9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627921 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-tls\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627960 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rdj\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-kube-api-access-22rdj\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627978 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e85c769-bd22-49ed-b5b4-8bfd40d7027a-serving-cert\") pod \"openshift-config-operator-7777fb866f-s7bxb\" (UID: \"7e85c769-bd22-49ed-b5b4-8bfd40d7027a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.627994 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc53c5b-df64-41ca-bee7-9497d7082fec-config\") pod \"kube-controller-manager-operator-78b949d7b-xpt4f\" (UID: \"ccc53c5b-df64-41ca-bee7-9497d7082fec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628048 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c1f9ec-786b-4f7b-9244-cb29ea924da9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj48t\" (UID: 
\"07c1f9ec-786b-4f7b-9244-cb29ea924da9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628087 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn26s\" (UniqueName: \"kubernetes.io/projected/55068ff7-230e-4368-aa62-4b4262d614ce-kube-api-access-jn26s\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628157 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-serving-cert\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628182 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55068ff7-230e-4368-aa62-4b4262d614ce-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628219 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-certificates\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628258 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c1f9ec-786b-4f7b-9244-cb29ea924da9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj48t\" (UID: \"07c1f9ec-786b-4f7b-9244-cb29ea924da9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628310 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55068ff7-230e-4368-aa62-4b4262d614ce-trusted-ca\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628327 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628365 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc53c5b-df64-41ca-bee7-9497d7082fec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xpt4f\" (UID: \"ccc53c5b-df64-41ca-bee7-9497d7082fec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.628382 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-etcd-service-ca\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.633292 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-ca-trust-extracted\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: E1203 17:41:54.633755 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.133735027 +0000 UTC m=+148.024430460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.637309 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.637295 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-certificates\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.637870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-tls\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.730808 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:54 crc kubenswrapper[4687]: E1203 17:41:54.731060 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.231021503 +0000 UTC m=+148.121716946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731702 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgp28\" (UniqueName: \"kubernetes.io/projected/4a33ff84-0bdb-4f03-a96d-40bd65bc3b95-kube-api-access-sgp28\") pod \"dns-operator-744455d44c-v4cqf\" (UID: \"4a33ff84-0bdb-4f03-a96d-40bd65bc3b95\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731738 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a827c9a9-8ab5-4135-b82d-032a234d0ab0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hwwgb\" (UID: \"a827c9a9-8ab5-4135-b82d-032a234d0ab0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731757 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f211703d-9bfe-4c35-a761-4f0a572ff317-serving-cert\") pod \"service-ca-operator-777779d784-8dr4q\" (UID: \"f211703d-9bfe-4c35-a761-4f0a572ff317\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731778 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/546db82f-4ba0-4b13-a501-064e42360219-proxy-tls\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731802 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gv5n\" (UniqueName: \"kubernetes.io/projected/122d3933-8a23-4268-b5e6-9908f55537c0-kube-api-access-5gv5n\") pod \"openshift-controller-manager-operator-756b6f6bc6-rkbht\" (UID: \"122d3933-8a23-4268-b5e6-9908f55537c0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731821 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce78ab98-f777-4d37-a63f-7c58b2281d8e-signing-cabundle\") pod \"service-ca-9c57cc56f-s5qm9\" (UID: \"ce78ab98-f777-4d37-a63f-7c58b2281d8e\") " pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731840 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731860 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-etcd-ca\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc 
kubenswrapper[4687]: I1203 17:41:54.731879 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-console-config\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731908 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebe116b4-b00f-4f26-8456-cfb815889fd0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bn658\" (UID: \"ebe116b4-b00f-4f26-8456-cfb815889fd0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731933 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-socket-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731953 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/122d3933-8a23-4268-b5e6-9908f55537c0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rkbht\" (UID: \"122d3933-8a23-4268-b5e6-9908f55537c0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.731975 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-serving-cert\") pod 
\"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732005 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc53c5b-df64-41ca-bee7-9497d7082fec-config\") pod \"kube-controller-manager-operator-78b949d7b-xpt4f\" (UID: \"ccc53c5b-df64-41ca-bee7-9497d7082fec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732025 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/546db82f-4ba0-4b13-a501-064e42360219-auth-proxy-config\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732052 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c1f9ec-786b-4f7b-9244-cb29ea924da9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj48t\" (UID: \"07c1f9ec-786b-4f7b-9244-cb29ea924da9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4-node-bootstrap-token\") pod \"machine-config-server-phlmz\" (UID: \"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4\") " pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732101 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv2rp\" (UniqueName: \"kubernetes.io/projected/ce78ab98-f777-4d37-a63f-7c58b2281d8e-kube-api-access-kv2rp\") pod \"service-ca-9c57cc56f-s5qm9\" (UID: \"ce78ab98-f777-4d37-a63f-7c58b2281d8e\") " pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732159 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-mountpoint-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732190 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55068ff7-230e-4368-aa62-4b4262d614ce-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732221 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnbbw\" (UniqueName: \"kubernetes.io/projected/940c5227-c4b5-4142-92b6-63b408453159-kube-api-access-tnbbw\") pod \"kube-storage-version-migrator-operator-b67b599dd-74znc\" (UID: \"940c5227-c4b5-4142-92b6-63b408453159\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c54388db-0d69-415b-99b8-e60ac35caac2-srv-cert\") pod \"olm-operator-6b444d44fb-7bs82\" 
(UID: \"c54388db-0d69-415b-99b8-e60ac35caac2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732283 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c54388db-0d69-415b-99b8-e60ac35caac2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7bs82\" (UID: \"c54388db-0d69-415b-99b8-e60ac35caac2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732308 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjp7n\" (UniqueName: \"kubernetes.io/projected/1fa9972a-0306-4ae6-9cef-d7d98214d25c-kube-api-access-fjp7n\") pod \"ingress-canary-2wpcl\" (UID: \"1fa9972a-0306-4ae6-9cef-d7d98214d25c\") " pod="openshift-ingress-canary/ingress-canary-2wpcl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732337 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55068ff7-230e-4368-aa62-4b4262d614ce-trusted-ca\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732364 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2xvg\" (UniqueName: \"kubernetes.io/projected/ebe116b4-b00f-4f26-8456-cfb815889fd0-kube-api-access-v2xvg\") pod \"package-server-manager-789f6589d5-bn658\" (UID: \"ebe116b4-b00f-4f26-8456-cfb815889fd0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732396 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-etcd-service-ca\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732420 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15c2c1d3-31da-423e-8e09-8d11382908b5-secret-volume\") pod \"collect-profiles-29413050-vk7fm\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732452 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8s2t\" (UniqueName: \"kubernetes.io/projected/987a223d-f20b-4288-bd46-cfaecfbd13c7-kube-api-access-d8s2t\") pod \"catalog-operator-68c6474976-6hvvv\" (UID: \"987a223d-f20b-4288-bd46-cfaecfbd13c7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732480 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e256d34-28bd-40a7-a14b-d76d21fbea56-apiservice-cert\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732526 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-config\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732568 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjp5v\" (UniqueName: \"kubernetes.io/projected/c54388db-0d69-415b-99b8-e60ac35caac2-kube-api-access-bjp5v\") pod \"olm-operator-6b444d44fb-7bs82\" (UID: \"c54388db-0d69-415b-99b8-e60ac35caac2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732594 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl6bk\" (UniqueName: \"kubernetes.io/projected/e248449e-8a3d-418a-8f0f-0b8484d27c39-kube-api-access-jl6bk\") pod \"control-plane-machine-set-operator-78cbb6b69f-xv2xd\" (UID: \"e248449e-8a3d-418a-8f0f-0b8484d27c39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732633 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/229175a9-fd55-4fd3-a02f-d5087886fe2b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w7v42\" (UID: \"229175a9-fd55-4fd3-a02f-d5087886fe2b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732657 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-csi-data-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732688 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16a03344-c427-400d-a611-a1be677c58b9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-774pl\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732718 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732743 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l9zv\" (UniqueName: \"kubernetes.io/projected/15c2c1d3-31da-423e-8e09-8d11382908b5-kube-api-access-4l9zv\") pod \"collect-profiles-29413050-vk7fm\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732804 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-etcd-client\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732867 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9p2\" (UniqueName: \"kubernetes.io/projected/546db82f-4ba0-4b13-a501-064e42360219-kube-api-access-5d9p2\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: 
\"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732908 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f211703d-9bfe-4c35-a761-4f0a572ff317-config\") pod \"service-ca-operator-777779d784-8dr4q\" (UID: \"f211703d-9bfe-4c35-a761-4f0a572ff317\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732938 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/987a223d-f20b-4288-bd46-cfaecfbd13c7-srv-cert\") pod \"catalog-operator-68c6474976-6hvvv\" (UID: \"987a223d-f20b-4288-bd46-cfaecfbd13c7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.732968 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e256d34-28bd-40a7-a14b-d76d21fbea56-webhook-cert\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122d3933-8a23-4268-b5e6-9908f55537c0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rkbht\" (UID: \"122d3933-8a23-4268-b5e6-9908f55537c0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733035 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16a03344-c427-400d-a611-a1be677c58b9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-774pl\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733058 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tqxb\" (UniqueName: \"kubernetes.io/projected/4ed99dad-799d-4601-b839-67fa75f22951-kube-api-access-6tqxb\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733089 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8sct\" (UniqueName: \"kubernetes.io/projected/bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4-kube-api-access-h8sct\") pod \"machine-config-server-phlmz\" (UID: \"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4\") " pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733109 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff606c22-18f6-4abd-b36a-4650378861d1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qbspz\" (UID: \"ff606c22-18f6-4abd-b36a-4650378861d1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733154 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zptbq\" (UniqueName: \"kubernetes.io/projected/d5d70eb6-6676-49c2-8853-55084c991036-kube-api-access-zptbq\") pod 
\"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733174 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a33ff84-0bdb-4f03-a96d-40bd65bc3b95-metrics-tls\") pod \"dns-operator-744455d44c-v4cqf\" (UID: \"4a33ff84-0bdb-4f03-a96d-40bd65bc3b95\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733193 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxr8w\" (UniqueName: \"kubernetes.io/projected/229175a9-fd55-4fd3-a02f-d5087886fe2b-kube-api-access-fxr8w\") pod \"multus-admission-controller-857f4d67dd-w7v42\" (UID: \"229175a9-fd55-4fd3-a02f-d5087886fe2b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733250 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc53c5b-df64-41ca-bee7-9497d7082fec-config\") pod \"kube-controller-manager-operator-78b949d7b-xpt4f\" (UID: \"ccc53c5b-df64-41ca-bee7-9497d7082fec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733269 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-oauth-config\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.733339 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ssxz4\" (UniqueName: \"kubernetes.io/projected/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-kube-api-access-ssxz4\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.734665 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-etcd-service-ca\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.735772 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55068ff7-230e-4368-aa62-4b4262d614ce-trusted-ca\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.735786 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce78ab98-f777-4d37-a63f-7c58b2281d8e-signing-key\") pod \"service-ca-9c57cc56f-s5qm9\" (UID: \"ce78ab98-f777-4d37-a63f-7c58b2281d8e\") " pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.735902 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a827c9a9-8ab5-4135-b82d-032a234d0ab0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hwwgb\" (UID: \"a827c9a9-8ab5-4135-b82d-032a234d0ab0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.735933 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-etcd-ca\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736267 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/355affcf-dc10-4be7-9500-136e8d4e795b-config-volume\") pod \"dns-default-pc4n2\" (UID: \"355affcf-dc10-4be7-9500-136e8d4e795b\") " pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736329 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e248449e-8a3d-418a-8f0f-0b8484d27c39-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xv2xd\" (UID: \"e248449e-8a3d-418a-8f0f-0b8484d27c39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736420 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736503 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-config\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736520 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c1f9ec-786b-4f7b-9244-cb29ea924da9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj48t\" (UID: \"07c1f9ec-786b-4f7b-9244-cb29ea924da9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736585 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-plugins-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736749 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22rdj\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-kube-api-access-22rdj\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: E1203 17:41:54.736816 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.236802533 +0000 UTC m=+148.127497966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736862 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e85c769-bd22-49ed-b5b4-8bfd40d7027a-serving-cert\") pod \"openshift-config-operator-7777fb866f-s7bxb\" (UID: \"7e85c769-bd22-49ed-b5b4-8bfd40d7027a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736913 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff606c22-18f6-4abd-b36a-4650378861d1-proxy-tls\") pod \"machine-config-controller-84d6567774-qbspz\" (UID: \"ff606c22-18f6-4abd-b36a-4650378861d1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.736929 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122d3933-8a23-4268-b5e6-9908f55537c0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rkbht\" (UID: \"122d3933-8a23-4268-b5e6-9908f55537c0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737094 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/d5d70eb6-6676-49c2-8853-55084c991036-metrics-certs\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737167 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5e256d34-28bd-40a7-a14b-d76d21fbea56-tmpfs\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737237 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn26s\" (UniqueName: \"kubernetes.io/projected/55068ff7-230e-4368-aa62-4b4262d614ce-kube-api-access-jn26s\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-serving-cert\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737332 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8z5m\" (UniqueName: \"kubernetes.io/projected/355affcf-dc10-4be7-9500-136e8d4e795b-kube-api-access-s8z5m\") pod \"dns-default-pc4n2\" (UID: \"355affcf-dc10-4be7-9500-136e8d4e795b\") " pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737367 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42x2j\" (UniqueName: \"kubernetes.io/projected/850c0a70-321b-4889-85d5-9873c7d1cdad-kube-api-access-42x2j\") pod \"migrator-59844c95c7-kt7gh\" (UID: \"850c0a70-321b-4889-85d5-9873c7d1cdad\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/940c5227-c4b5-4142-92b6-63b408453159-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-74znc\" (UID: \"940c5227-c4b5-4142-92b6-63b408453159\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737431 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c1f9ec-786b-4f7b-9244-cb29ea924da9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj48t\" (UID: \"07c1f9ec-786b-4f7b-9244-cb29ea924da9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737463 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737500 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc53c5b-df64-41ca-bee7-9497d7082fec-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-xpt4f\" (UID: \"ccc53c5b-df64-41ca-bee7-9497d7082fec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737528 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d5d70eb6-6676-49c2-8853-55084c991036-default-certificate\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-service-ca\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.737595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-oauth-serving-cert\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.738959 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-console-config\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.739849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-oauth-serving-cert\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.740139 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-oauth-config\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.740405 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-trusted-ca-bundle\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.740500 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-registration-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.740573 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-serving-cert\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.740819 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ccc53c5b-df64-41ca-bee7-9497d7082fec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xpt4f\" (UID: \"ccc53c5b-df64-41ca-bee7-9497d7082fec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.740924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741039 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07c1f9ec-786b-4f7b-9244-cb29ea924da9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj48t\" (UID: \"07c1f9ec-786b-4f7b-9244-cb29ea924da9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741433 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/7e85c769-bd22-49ed-b5b4-8bfd40d7027a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-s7bxb\" (UID: \"7e85c769-bd22-49ed-b5b4-8bfd40d7027a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741511 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d5d70eb6-6676-49c2-8853-55084c991036-stats-auth\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " 
pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741535 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fa9972a-0306-4ae6-9cef-d7d98214d25c-cert\") pod \"ingress-canary-2wpcl\" (UID: \"1fa9972a-0306-4ae6-9cef-d7d98214d25c\") " pod="openshift-ingress-canary/ingress-canary-2wpcl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741567 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/546db82f-4ba0-4b13-a501-064e42360219-images\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741626 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4-certs\") pod \"machine-config-server-phlmz\" (UID: \"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4\") " pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741654 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-bound-sa-token\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5d70eb6-6676-49c2-8853-55084c991036-service-ca-bundle\") pod 
\"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741713 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxvzd\" (UniqueName: \"kubernetes.io/projected/5e256d34-28bd-40a7-a14b-d76d21fbea56-kube-api-access-wxvzd\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741800 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-trusted-ca-bundle\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741869 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-trusted-ca\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741895 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gp92\" (UniqueName: \"kubernetes.io/projected/7e85c769-bd22-49ed-b5b4-8bfd40d7027a-kube-api-access-6gp92\") pod \"openshift-config-operator-7777fb866f-s7bxb\" (UID: \"7e85c769-bd22-49ed-b5b4-8bfd40d7027a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.741964 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/7e85c769-bd22-49ed-b5b4-8bfd40d7027a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-s7bxb\" (UID: \"7e85c769-bd22-49ed-b5b4-8bfd40d7027a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742025 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-service-ca\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742075 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15c2c1d3-31da-423e-8e09-8d11382908b5-config-volume\") pod \"collect-profiles-29413050-vk7fm\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742115 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb27h\" (UniqueName: \"kubernetes.io/projected/16a03344-c427-400d-a611-a1be677c58b9-kube-api-access-pb27h\") pod \"marketplace-operator-79b997595-774pl\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742199 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcj6v\" (UniqueName: \"kubernetes.io/projected/ff606c22-18f6-4abd-b36a-4650378861d1-kube-api-access-pcj6v\") pod \"machine-config-controller-84d6567774-qbspz\" (UID: \"ff606c22-18f6-4abd-b36a-4650378861d1\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742302 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/355affcf-dc10-4be7-9500-136e8d4e795b-metrics-tls\") pod \"dns-default-pc4n2\" (UID: \"355affcf-dc10-4be7-9500-136e8d4e795b\") " pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742350 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940c5227-c4b5-4142-92b6-63b408453159-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-74znc\" (UID: \"940c5227-c4b5-4142-92b6-63b408453159\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742402 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/987a223d-f20b-4288-bd46-cfaecfbd13c7-profile-collector-cert\") pod \"catalog-operator-68c6474976-6hvvv\" (UID: \"987a223d-f20b-4288-bd46-cfaecfbd13c7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbqc7\" (UniqueName: \"kubernetes.io/projected/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-kube-api-access-nbqc7\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742532 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rbgl\" (UniqueName: \"kubernetes.io/projected/f211703d-9bfe-4c35-a761-4f0a572ff317-kube-api-access-2rbgl\") pod \"service-ca-operator-777779d784-8dr4q\" (UID: \"f211703d-9bfe-4c35-a761-4f0a572ff317\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742570 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvspw\" (UniqueName: \"kubernetes.io/projected/1c55e5e2-5437-468e-9410-605afa2612d9-kube-api-access-rvspw\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742596 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55068ff7-230e-4368-aa62-4b4262d614ce-metrics-tls\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.742640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a827c9a9-8ab5-4135-b82d-032a234d0ab0-config\") pod \"kube-apiserver-operator-766d6c64bb-hwwgb\" (UID: \"a827c9a9-8ab5-4135-b82d-032a234d0ab0\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.743083 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/122d3933-8a23-4268-b5e6-9908f55537c0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rkbht\" (UID: \"122d3933-8a23-4268-b5e6-9908f55537c0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.743385 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a827c9a9-8ab5-4135-b82d-032a234d0ab0-config\") pod \"kube-apiserver-operator-766d6c64bb-hwwgb\" (UID: \"a827c9a9-8ab5-4135-b82d-032a234d0ab0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.744405 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.745430 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-trusted-ca\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.745853 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/07c1f9ec-786b-4f7b-9244-cb29ea924da9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj48t\" (UID: \"07c1f9ec-786b-4f7b-9244-cb29ea924da9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.748894 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-serving-cert\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.749624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-installation-pull-secrets\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.759670 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc53c5b-df64-41ca-bee7-9497d7082fec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xpt4f\" (UID: \"ccc53c5b-df64-41ca-bee7-9497d7082fec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.760860 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e85c769-bd22-49ed-b5b4-8bfd40d7027a-serving-cert\") pod \"openshift-config-operator-7777fb866f-s7bxb\" (UID: \"7e85c769-bd22-49ed-b5b4-8bfd40d7027a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 
17:41:54.761298 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-etcd-client\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.761672 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a827c9a9-8ab5-4135-b82d-032a234d0ab0-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hwwgb\" (UID: \"a827c9a9-8ab5-4135-b82d-032a234d0ab0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.762178 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/55068ff7-230e-4368-aa62-4b4262d614ce-metrics-tls\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.776250 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07c1f9ec-786b-4f7b-9244-cb29ea924da9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-hj48t\" (UID: \"07c1f9ec-786b-4f7b-9244-cb29ea924da9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.807241 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gv5n\" (UniqueName: \"kubernetes.io/projected/122d3933-8a23-4268-b5e6-9908f55537c0-kube-api-access-5gv5n\") pod \"openshift-controller-manager-operator-756b6f6bc6-rkbht\" (UID: \"122d3933-8a23-4268-b5e6-9908f55537c0\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.821345 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.829194 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55068ff7-230e-4368-aa62-4b4262d614ce-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.840398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssxz4\" (UniqueName: \"kubernetes.io/projected/8f40af59-1544-4694-a1b7-2a6eee4bc2c8-kube-api-access-ssxz4\") pod \"etcd-operator-b45778765-lg5jg\" (UID: \"8f40af59-1544-4694-a1b7-2a6eee4bc2c8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.845759 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.845925 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9p2\" (UniqueName: \"kubernetes.io/projected/546db82f-4ba0-4b13-a501-064e42360219-kube-api-access-5d9p2\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 
03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.845948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f211703d-9bfe-4c35-a761-4f0a572ff317-config\") pod \"service-ca-operator-777779d784-8dr4q\" (UID: \"f211703d-9bfe-4c35-a761-4f0a572ff317\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.845968 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e256d34-28bd-40a7-a14b-d76d21fbea56-webhook-cert\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.845982 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/987a223d-f20b-4288-bd46-cfaecfbd13c7-srv-cert\") pod \"catalog-operator-68c6474976-6hvvv\" (UID: \"987a223d-f20b-4288-bd46-cfaecfbd13c7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846001 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16a03344-c427-400d-a611-a1be677c58b9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-774pl\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846018 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tqxb\" (UniqueName: \"kubernetes.io/projected/4ed99dad-799d-4601-b839-67fa75f22951-kube-api-access-6tqxb\") pod \"csi-hostpathplugin-kv9nd\" (UID: 
\"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846036 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff606c22-18f6-4abd-b36a-4650378861d1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qbspz\" (UID: \"ff606c22-18f6-4abd-b36a-4650378861d1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846051 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8sct\" (UniqueName: \"kubernetes.io/projected/bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4-kube-api-access-h8sct\") pod \"machine-config-server-phlmz\" (UID: \"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4\") " pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846078 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zptbq\" (UniqueName: \"kubernetes.io/projected/d5d70eb6-6676-49c2-8853-55084c991036-kube-api-access-zptbq\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846097 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a33ff84-0bdb-4f03-a96d-40bd65bc3b95-metrics-tls\") pod \"dns-operator-744455d44c-v4cqf\" (UID: \"4a33ff84-0bdb-4f03-a96d-40bd65bc3b95\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846112 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxr8w\" (UniqueName: 
\"kubernetes.io/projected/229175a9-fd55-4fd3-a02f-d5087886fe2b-kube-api-access-fxr8w\") pod \"multus-admission-controller-857f4d67dd-w7v42\" (UID: \"229175a9-fd55-4fd3-a02f-d5087886fe2b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce78ab98-f777-4d37-a63f-7c58b2281d8e-signing-key\") pod \"service-ca-9c57cc56f-s5qm9\" (UID: \"ce78ab98-f777-4d37-a63f-7c58b2281d8e\") " pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846171 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e248449e-8a3d-418a-8f0f-0b8484d27c39-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xv2xd\" (UID: \"e248449e-8a3d-418a-8f0f-0b8484d27c39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846192 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/355affcf-dc10-4be7-9500-136e8d4e795b-config-volume\") pod \"dns-default-pc4n2\" (UID: \"355affcf-dc10-4be7-9500-136e8d4e795b\") " pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:54 crc kubenswrapper[4687]: E1203 17:41:54.846309 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.346251846 +0000 UTC m=+148.236947279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846459 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-plugins-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff606c22-18f6-4abd-b36a-4650378861d1-proxy-tls\") pod \"machine-config-controller-84d6567774-qbspz\" (UID: \"ff606c22-18f6-4abd-b36a-4650378861d1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5d70eb6-6676-49c2-8853-55084c991036-metrics-certs\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846579 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5e256d34-28bd-40a7-a14b-d76d21fbea56-tmpfs\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846645 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8z5m\" (UniqueName: \"kubernetes.io/projected/355affcf-dc10-4be7-9500-136e8d4e795b-kube-api-access-s8z5m\") pod \"dns-default-pc4n2\" (UID: \"355affcf-dc10-4be7-9500-136e8d4e795b\") " pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846690 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/940c5227-c4b5-4142-92b6-63b408453159-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-74znc\" (UID: \"940c5227-c4b5-4142-92b6-63b408453159\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846693 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f211703d-9bfe-4c35-a761-4f0a572ff317-config\") pod \"service-ca-operator-777779d784-8dr4q\" (UID: \"f211703d-9bfe-4c35-a761-4f0a572ff317\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846712 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42x2j\" (UniqueName: \"kubernetes.io/projected/850c0a70-321b-4889-85d5-9873c7d1cdad-kube-api-access-42x2j\") pod \"migrator-59844c95c7-kt7gh\" (UID: \"850c0a70-321b-4889-85d5-9873c7d1cdad\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846754 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/355affcf-dc10-4be7-9500-136e8d4e795b-config-volume\") pod \"dns-default-pc4n2\" (UID: \"355affcf-dc10-4be7-9500-136e8d4e795b\") " pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846797 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d5d70eb6-6676-49c2-8853-55084c991036-default-certificate\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846858 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-registration-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846881 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d5d70eb6-6676-49c2-8853-55084c991036-stats-auth\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846913 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fa9972a-0306-4ae6-9cef-d7d98214d25c-cert\") pod \"ingress-canary-2wpcl\" (UID: \"1fa9972a-0306-4ae6-9cef-d7d98214d25c\") " pod="openshift-ingress-canary/ingress-canary-2wpcl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846930 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/546db82f-4ba0-4b13-a501-064e42360219-images\") 
pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.846968 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5d70eb6-6676-49c2-8853-55084c991036-service-ca-bundle\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847000 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4-certs\") pod \"machine-config-server-phlmz\" (UID: \"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4\") " pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847020 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxvzd\" (UniqueName: \"kubernetes.io/projected/5e256d34-28bd-40a7-a14b-d76d21fbea56-kube-api-access-wxvzd\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847101 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15c2c1d3-31da-423e-8e09-8d11382908b5-config-volume\") pod \"collect-profiles-29413050-vk7fm\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847160 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pb27h\" (UniqueName: \"kubernetes.io/projected/16a03344-c427-400d-a611-a1be677c58b9-kube-api-access-pb27h\") pod \"marketplace-operator-79b997595-774pl\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847182 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcj6v\" (UniqueName: \"kubernetes.io/projected/ff606c22-18f6-4abd-b36a-4650378861d1-kube-api-access-pcj6v\") pod \"machine-config-controller-84d6567774-qbspz\" (UID: \"ff606c22-18f6-4abd-b36a-4650378861d1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rbgl\" (UniqueName: \"kubernetes.io/projected/f211703d-9bfe-4c35-a761-4f0a572ff317-kube-api-access-2rbgl\") pod \"service-ca-operator-777779d784-8dr4q\" (UID: \"f211703d-9bfe-4c35-a761-4f0a572ff317\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847260 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/355affcf-dc10-4be7-9500-136e8d4e795b-metrics-tls\") pod \"dns-default-pc4n2\" (UID: \"355affcf-dc10-4be7-9500-136e8d4e795b\") " pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940c5227-c4b5-4142-92b6-63b408453159-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-74znc\" (UID: \"940c5227-c4b5-4142-92b6-63b408453159\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847313 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/987a223d-f20b-4288-bd46-cfaecfbd13c7-profile-collector-cert\") pod \"catalog-operator-68c6474976-6hvvv\" (UID: \"987a223d-f20b-4288-bd46-cfaecfbd13c7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847374 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgp28\" (UniqueName: \"kubernetes.io/projected/4a33ff84-0bdb-4f03-a96d-40bd65bc3b95-kube-api-access-sgp28\") pod \"dns-operator-744455d44c-v4cqf\" (UID: \"4a33ff84-0bdb-4f03-a96d-40bd65bc3b95\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847407 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f211703d-9bfe-4c35-a761-4f0a572ff317-serving-cert\") pod \"service-ca-operator-777779d784-8dr4q\" (UID: \"f211703d-9bfe-4c35-a761-4f0a572ff317\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/546db82f-4ba0-4b13-a501-064e42360219-proxy-tls\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/ce78ab98-f777-4d37-a63f-7c58b2281d8e-signing-cabundle\") pod \"service-ca-9c57cc56f-s5qm9\" (UID: \"ce78ab98-f777-4d37-a63f-7c58b2281d8e\") " pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebe116b4-b00f-4f26-8456-cfb815889fd0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bn658\" (UID: \"ebe116b4-b00f-4f26-8456-cfb815889fd0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847557 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-socket-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847617 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/546db82f-4ba0-4b13-a501-064e42360219-auth-proxy-config\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847643 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-mountpoint-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847661 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4-node-bootstrap-token\") pod \"machine-config-server-phlmz\" (UID: \"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4\") " pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv2rp\" (UniqueName: \"kubernetes.io/projected/ce78ab98-f777-4d37-a63f-7c58b2281d8e-kube-api-access-kv2rp\") pod \"service-ca-9c57cc56f-s5qm9\" (UID: \"ce78ab98-f777-4d37-a63f-7c58b2281d8e\") " pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847729 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbbw\" (UniqueName: \"kubernetes.io/projected/940c5227-c4b5-4142-92b6-63b408453159-kube-api-access-tnbbw\") pod \"kube-storage-version-migrator-operator-b67b599dd-74znc\" (UID: \"940c5227-c4b5-4142-92b6-63b408453159\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847755 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c54388db-0d69-415b-99b8-e60ac35caac2-srv-cert\") pod \"olm-operator-6b444d44fb-7bs82\" (UID: \"c54388db-0d69-415b-99b8-e60ac35caac2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847787 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c54388db-0d69-415b-99b8-e60ac35caac2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7bs82\" (UID: 
\"c54388db-0d69-415b-99b8-e60ac35caac2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847806 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjp7n\" (UniqueName: \"kubernetes.io/projected/1fa9972a-0306-4ae6-9cef-d7d98214d25c-kube-api-access-fjp7n\") pod \"ingress-canary-2wpcl\" (UID: \"1fa9972a-0306-4ae6-9cef-d7d98214d25c\") " pod="openshift-ingress-canary/ingress-canary-2wpcl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2xvg\" (UniqueName: \"kubernetes.io/projected/ebe116b4-b00f-4f26-8456-cfb815889fd0-kube-api-access-v2xvg\") pod \"package-server-manager-789f6589d5-bn658\" (UID: \"ebe116b4-b00f-4f26-8456-cfb815889fd0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847866 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15c2c1d3-31da-423e-8e09-8d11382908b5-secret-volume\") pod \"collect-profiles-29413050-vk7fm\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8s2t\" (UniqueName: \"kubernetes.io/projected/987a223d-f20b-4288-bd46-cfaecfbd13c7-kube-api-access-d8s2t\") pod \"catalog-operator-68c6474976-6hvvv\" (UID: \"987a223d-f20b-4288-bd46-cfaecfbd13c7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847905 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/5e256d34-28bd-40a7-a14b-d76d21fbea56-apiservice-cert\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847943 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjp5v\" (UniqueName: \"kubernetes.io/projected/c54388db-0d69-415b-99b8-e60ac35caac2-kube-api-access-bjp5v\") pod \"olm-operator-6b444d44fb-7bs82\" (UID: \"c54388db-0d69-415b-99b8-e60ac35caac2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847961 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl6bk\" (UniqueName: \"kubernetes.io/projected/e248449e-8a3d-418a-8f0f-0b8484d27c39-kube-api-access-jl6bk\") pod \"control-plane-machine-set-operator-78cbb6b69f-xv2xd\" (UID: \"e248449e-8a3d-418a-8f0f-0b8484d27c39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847996 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15c2c1d3-31da-423e-8e09-8d11382908b5-config-volume\") pod \"collect-profiles-29413050-vk7fm\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.847989 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/229175a9-fd55-4fd3-a02f-d5087886fe2b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w7v42\" (UID: \"229175a9-fd55-4fd3-a02f-d5087886fe2b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" Dec 03 17:41:54 
crc kubenswrapper[4687]: I1203 17:41:54.848027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-csi-data-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.848047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16a03344-c427-400d-a611-a1be677c58b9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-774pl\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.848083 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l9zv\" (UniqueName: \"kubernetes.io/projected/15c2c1d3-31da-423e-8e09-8d11382908b5-kube-api-access-4l9zv\") pod \"collect-profiles-29413050-vk7fm\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.848608 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-registration-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.852196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce78ab98-f777-4d37-a63f-7c58b2281d8e-signing-cabundle\") pod \"service-ca-9c57cc56f-s5qm9\" (UID: \"ce78ab98-f777-4d37-a63f-7c58b2281d8e\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.852676 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940c5227-c4b5-4142-92b6-63b408453159-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-74znc\" (UID: \"940c5227-c4b5-4142-92b6-63b408453159\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.853473 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-plugins-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.857977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5e256d34-28bd-40a7-a14b-d76d21fbea56-tmpfs\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.859112 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16a03344-c427-400d-a611-a1be677c58b9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-774pl\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.859271 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/546db82f-4ba0-4b13-a501-064e42360219-images\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: 
\"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.859543 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5d70eb6-6676-49c2-8853-55084c991036-service-ca-bundle\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.859640 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-socket-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.860089 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/546db82f-4ba0-4b13-a501-064e42360219-auth-proxy-config\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.860164 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-mountpoint-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.861252 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/355affcf-dc10-4be7-9500-136e8d4e795b-metrics-tls\") pod \"dns-default-pc4n2\" (UID: 
\"355affcf-dc10-4be7-9500-136e8d4e795b\") " pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.861582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff606c22-18f6-4abd-b36a-4650378861d1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qbspz\" (UID: \"ff606c22-18f6-4abd-b36a-4650378861d1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.861893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4ed99dad-799d-4601-b839-67fa75f22951-csi-data-dir\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.863493 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e256d34-28bd-40a7-a14b-d76d21fbea56-webhook-cert\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.863864 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d5d70eb6-6676-49c2-8853-55084c991036-default-certificate\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.864377 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c54388db-0d69-415b-99b8-e60ac35caac2-srv-cert\") pod \"olm-operator-6b444d44fb-7bs82\" (UID: 
\"c54388db-0d69-415b-99b8-e60ac35caac2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.864763 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/987a223d-f20b-4288-bd46-cfaecfbd13c7-srv-cert\") pod \"catalog-operator-68c6474976-6hvvv\" (UID: \"987a223d-f20b-4288-bd46-cfaecfbd13c7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.867897 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d5d70eb6-6676-49c2-8853-55084c991036-stats-auth\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.873576 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/546db82f-4ba0-4b13-a501-064e42360219-proxy-tls\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.874826 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15c2c1d3-31da-423e-8e09-8d11382908b5-secret-volume\") pod \"collect-profiles-29413050-vk7fm\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.875597 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a33ff84-0bdb-4f03-a96d-40bd65bc3b95-metrics-tls\") pod 
\"dns-operator-744455d44c-v4cqf\" (UID: \"4a33ff84-0bdb-4f03-a96d-40bd65bc3b95\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.875886 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/229175a9-fd55-4fd3-a02f-d5087886fe2b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w7v42\" (UID: \"229175a9-fd55-4fd3-a02f-d5087886fe2b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.876346 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/987a223d-f20b-4288-bd46-cfaecfbd13c7-profile-collector-cert\") pod \"catalog-operator-68c6474976-6hvvv\" (UID: \"987a223d-f20b-4288-bd46-cfaecfbd13c7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.876515 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16a03344-c427-400d-a611-a1be677c58b9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-774pl\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.876726 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebe116b4-b00f-4f26-8456-cfb815889fd0-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bn658\" (UID: \"ebe116b4-b00f-4f26-8456-cfb815889fd0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.876996 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce78ab98-f777-4d37-a63f-7c58b2281d8e-signing-key\") pod \"service-ca-9c57cc56f-s5qm9\" (UID: \"ce78ab98-f777-4d37-a63f-7c58b2281d8e\") " pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.877302 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e256d34-28bd-40a7-a14b-d76d21fbea56-apiservice-cert\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.877476 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4-node-bootstrap-token\") pod \"machine-config-server-phlmz\" (UID: \"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4\") " pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.878303 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4-certs\") pod \"machine-config-server-phlmz\" (UID: \"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4\") " pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.878418 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c54388db-0d69-415b-99b8-e60ac35caac2-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7bs82\" (UID: \"c54388db-0d69-415b-99b8-e60ac35caac2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.880307 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5d70eb6-6676-49c2-8853-55084c991036-metrics-certs\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.885569 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f211703d-9bfe-4c35-a761-4f0a572ff317-serving-cert\") pod \"service-ca-operator-777779d784-8dr4q\" (UID: \"f211703d-9bfe-4c35-a761-4f0a572ff317\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.885839 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1fa9972a-0306-4ae6-9cef-d7d98214d25c-cert\") pod \"ingress-canary-2wpcl\" (UID: \"1fa9972a-0306-4ae6-9cef-d7d98214d25c\") " pod="openshift-ingress-canary/ingress-canary-2wpcl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.885901 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a827c9a9-8ab5-4135-b82d-032a234d0ab0-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hwwgb\" (UID: \"a827c9a9-8ab5-4135-b82d-032a234d0ab0\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.887232 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e248449e-8a3d-418a-8f0f-0b8484d27c39-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xv2xd\" (UID: \"e248449e-8a3d-418a-8f0f-0b8484d27c39\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.891157 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff606c22-18f6-4abd-b36a-4650378861d1-proxy-tls\") pod \"machine-config-controller-84d6567774-qbspz\" (UID: \"ff606c22-18f6-4abd-b36a-4650378861d1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.899325 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/940c5227-c4b5-4142-92b6-63b408453159-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-74znc\" (UID: \"940c5227-c4b5-4142-92b6-63b408453159\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.900887 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn26s\" (UniqueName: \"kubernetes.io/projected/55068ff7-230e-4368-aa62-4b4262d614ce-kube-api-access-jn26s\") pod \"ingress-operator-5b745b69d9-ts2g8\" (UID: \"55068ff7-230e-4368-aa62-4b4262d614ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.901046 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22rdj\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-kube-api-access-22rdj\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.923709 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-q8fqs"] Dec 03 
17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.924841 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r92g"] Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.928817 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.936736 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccc53c5b-df64-41ca-bee7-9497d7082fec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xpt4f\" (UID: \"ccc53c5b-df64-41ca-bee7-9497d7082fec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.948426 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-bound-sa-token\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.949148 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:54 crc kubenswrapper[4687]: E1203 17:41:54.949582 4687 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.449566663 +0000 UTC m=+148.340262096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.970863 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gp92\" (UniqueName: \"kubernetes.io/projected/7e85c769-bd22-49ed-b5b4-8bfd40d7027a-kube-api-access-6gp92\") pod \"openshift-config-operator-7777fb866f-s7bxb\" (UID: \"7e85c769-bd22-49ed-b5b4-8bfd40d7027a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: W1203 17:41:54.979517 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcfb21f2_e1fe_42f0_b166_a2f50847cc6b.slice/crio-274bad94b72e4ae885df9168c624dc07e606ebadeb51152efa5a8e240bdbc68d WatchSource:0}: Error finding container 274bad94b72e4ae885df9168c624dc07e606ebadeb51152efa5a8e240bdbc68d: Status 404 returned error can't find the container with id 274bad94b72e4ae885df9168c624dc07e606ebadeb51152efa5a8e240bdbc68d Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.980795 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:54 crc kubenswrapper[4687]: I1203 17:41:54.998646 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbqc7\" (UniqueName: \"kubernetes.io/projected/483b2f56-58e8-4a3a-9b7f-1126d1da77d2-kube-api-access-nbqc7\") pod \"cluster-image-registry-operator-dc59b4c8b-wskjl\" (UID: \"483b2f56-58e8-4a3a-9b7f-1126d1da77d2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.017424 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvspw\" (UniqueName: \"kubernetes.io/projected/1c55e5e2-5437-468e-9410-605afa2612d9-kube-api-access-rvspw\") pod \"console-f9d7485db-mkvps\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.049799 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.050306 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.550288913 +0000 UTC m=+148.440984346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.062108 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d9p2\" (UniqueName: \"kubernetes.io/projected/546db82f-4ba0-4b13-a501-064e42360219-kube-api-access-5d9p2\") pod \"machine-config-operator-74547568cd-swfdh\" (UID: \"546db82f-4ba0-4b13-a501-064e42360219\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.069538 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.078392 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.080834 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8sct\" (UniqueName: \"kubernetes.io/projected/bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4-kube-api-access-h8sct\") pod \"machine-config-server-phlmz\" (UID: \"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4\") " pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.087480 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.093650 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l9zv\" (UniqueName: \"kubernetes.io/projected/15c2c1d3-31da-423e-8e09-8d11382908b5-kube-api-access-4l9zv\") pod \"collect-profiles-29413050-vk7fm\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.099952 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.111523 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.113637 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb27h\" (UniqueName: \"kubernetes.io/projected/16a03344-c427-400d-a611-a1be677c58b9-kube-api-access-pb27h\") pod \"marketplace-operator-79b997595-774pl\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") " pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.133236 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.138704 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcj6v\" (UniqueName: \"kubernetes.io/projected/ff606c22-18f6-4abd-b36a-4650378861d1-kube-api-access-pcj6v\") pod \"machine-config-controller-84d6567774-qbspz\" (UID: \"ff606c22-18f6-4abd-b36a-4650378861d1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.151940 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.153036 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.653020794 +0000 UTC m=+148.543716227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.154424 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rbgl\" (UniqueName: \"kubernetes.io/projected/f211703d-9bfe-4c35-a761-4f0a572ff317-kube-api-access-2rbgl\") pod \"service-ca-operator-777779d784-8dr4q\" (UID: \"f211703d-9bfe-4c35-a761-4f0a572ff317\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.161234 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.173849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbbw\" (UniqueName: \"kubernetes.io/projected/940c5227-c4b5-4142-92b6-63b408453159-kube-api-access-tnbbw\") pod \"kube-storage-version-migrator-operator-b67b599dd-74znc\" (UID: \"940c5227-c4b5-4142-92b6-63b408453159\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.186181 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.187739 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" event={"ID":"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b","Type":"ContainerStarted","Data":"274bad94b72e4ae885df9168c624dc07e606ebadeb51152efa5a8e240bdbc68d"} Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.190149 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"413b1701b7bf5a54da88d312ad0131931e6124503f81b5f830823155ec32eb42"} Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.190988 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" event={"ID":"f2b2ecfc-7839-4364-9e65-988bb4f666f5","Type":"ContainerStarted","Data":"81b5a45e7a039c668c0edb420228f371f38b066711d190d607f696d6f56e6c6d"} Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.196855 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxr8w\" (UniqueName: \"kubernetes.io/projected/229175a9-fd55-4fd3-a02f-d5087886fe2b-kube-api-access-fxr8w\") pod \"multus-admission-controller-857f4d67dd-w7v42\" (UID: \"229175a9-fd55-4fd3-a02f-d5087886fe2b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.197920 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" event={"ID":"40104d97-9e24-4792-927a-8861f63d1df0","Type":"ContainerStarted","Data":"449c21e2022dca233f17c4ad295de7853146795a4237619bb0e008c9a354612f"} Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.197962 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" event={"ID":"40104d97-9e24-4792-927a-8861f63d1df0","Type":"ContainerStarted","Data":"5bfce467cb62a57999bd9e2e968adfb04033eb721e7ce51ba6e310c334e77a00"} Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.198821 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.213488 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.216952 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.225096 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zptbq\" (UniqueName: \"kubernetes.io/projected/d5d70eb6-6676-49c2-8853-55084c991036-kube-api-access-zptbq\") pod \"router-default-5444994796-4bjp6\" (UID: \"d5d70eb6-6676-49c2-8853-55084c991036\") " pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.231013 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.241093 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjp7n\" (UniqueName: \"kubernetes.io/projected/1fa9972a-0306-4ae6-9cef-d7d98214d25c-kube-api-access-fjp7n\") pod \"ingress-canary-2wpcl\" (UID: \"1fa9972a-0306-4ae6-9cef-d7d98214d25c\") " pod="openshift-ingress-canary/ingress-canary-2wpcl" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.251784 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.254536 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.254926 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.754864015 +0000 UTC m=+148.645559448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.260690 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2xvg\" (UniqueName: \"kubernetes.io/projected/ebe116b4-b00f-4f26-8456-cfb815889fd0-kube-api-access-v2xvg\") pod \"package-server-manager-789f6589d5-bn658\" (UID: \"ebe116b4-b00f-4f26-8456-cfb815889fd0\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.265511 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-phlmz" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.278178 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8z5m\" (UniqueName: \"kubernetes.io/projected/355affcf-dc10-4be7-9500-136e8d4e795b-kube-api-access-s8z5m\") pod \"dns-default-pc4n2\" (UID: \"355affcf-dc10-4be7-9500-136e8d4e795b\") " pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.295018 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2wpcl" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.310080 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-zrxg4"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.316674 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8s2t\" (UniqueName: \"kubernetes.io/projected/987a223d-f20b-4288-bd46-cfaecfbd13c7-kube-api-access-d8s2t\") pod \"catalog-operator-68c6474976-6hvvv\" (UID: \"987a223d-f20b-4288-bd46-cfaecfbd13c7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.333667 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tqxb\" (UniqueName: \"kubernetes.io/projected/4ed99dad-799d-4601-b839-67fa75f22951-kube-api-access-6tqxb\") pod \"csi-hostpathplugin-kv9nd\" (UID: \"4ed99dad-799d-4601-b839-67fa75f22951\") " pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.340794 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nv4n7"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.340840 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.343568 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxvzd\" (UniqueName: \"kubernetes.io/projected/5e256d34-28bd-40a7-a14b-d76d21fbea56-kube-api-access-wxvzd\") pod \"packageserver-d55dfcdfc-hsnjj\" (UID: \"5e256d34-28bd-40a7-a14b-d76d21fbea56\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.350075 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gd77z"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.356523 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.356575 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hp9ll"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.359353 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.359739 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.859722621 +0000 UTC m=+148.750418124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.370168 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv2rp\" (UniqueName: \"kubernetes.io/projected/ce78ab98-f777-4d37-a63f-7c58b2281d8e-kube-api-access-kv2rp\") pod \"service-ca-9c57cc56f-s5qm9\" (UID: \"ce78ab98-f777-4d37-a63f-7c58b2281d8e\") " pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:55 crc kubenswrapper[4687]: W1203 17:41:55.382028 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fa7fe3b_4230_4cbe_a1f5_461458f1d95d.slice/crio-2710a71d01594bffadb8bcbaf1cf893a0d2959ac7f1ee915c01fdde3f90acf13 WatchSource:0}: Error finding container 2710a71d01594bffadb8bcbaf1cf893a0d2959ac7f1ee915c01fdde3f90acf13: Status 404 returned error can't find the container with id 2710a71d01594bffadb8bcbaf1cf893a0d2959ac7f1ee915c01fdde3f90acf13 Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.385093 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-ttkxf"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.386830 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.390606 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgp28\" (UniqueName: 
\"kubernetes.io/projected/4a33ff84-0bdb-4f03-a96d-40bd65bc3b95-kube-api-access-sgp28\") pod \"dns-operator-744455d44c-v4cqf\" (UID: \"4a33ff84-0bdb-4f03-a96d-40bd65bc3b95\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.422858 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjp5v\" (UniqueName: \"kubernetes.io/projected/c54388db-0d69-415b-99b8-e60ac35caac2-kube-api-access-bjp5v\") pod \"olm-operator-6b444d44fb-7bs82\" (UID: \"c54388db-0d69-415b-99b8-e60ac35caac2\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.424724 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.427892 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42x2j\" (UniqueName: \"kubernetes.io/projected/850c0a70-321b-4889-85d5-9873c7d1cdad-kube-api-access-42x2j\") pod \"migrator-59844c95c7-kt7gh\" (UID: \"850c0a70-321b-4889-85d5-9873c7d1cdad\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.441996 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.459453 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.460427 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.460607 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.960584148 +0000 UTC m=+148.851279581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.460902 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.461386 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:55.961366182 +0000 UTC m=+148.852061615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.468488 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.476181 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.490927 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.497057 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl6bk\" (UniqueName: \"kubernetes.io/projected/e248449e-8a3d-418a-8f0f-0b8484d27c39-kube-api-access-jl6bk\") pod \"control-plane-machine-set-operator-78cbb6b69f-xv2xd\" (UID: \"e248449e-8a3d-418a-8f0f-0b8484d27c39\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.500645 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.524282 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.525863 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.540628 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.545375 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.547949 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.556944 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pc4n2" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.577853 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.579508 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.079478886 +0000 UTC m=+148.970174319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.585505 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.610020 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.679763 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.680148 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.180136013 +0000 UTC m=+149.070831446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.751495 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.780434 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.780714 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.280694535 +0000 UTC m=+149.171389968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.780743 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.781017 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.28100926 +0000 UTC m=+149.171704693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.882111 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.882761 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.382706594 +0000 UTC m=+149.273402037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.883682 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.887694 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.387659877 +0000 UTC m=+149.278355310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.898739 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.917630 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.936823 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb"] Dec 03 17:41:55 crc kubenswrapper[4687]: I1203 17:41:55.985018 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:55 crc kubenswrapper[4687]: E1203 17:41:55.985215 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.485196785 +0000 UTC m=+149.375892218 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.086858 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.087332 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.587315607 +0000 UTC m=+149.478011040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.188395 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.188556 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.6885204 +0000 UTC m=+149.579215833 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.188596 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.188881 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.688867315 +0000 UTC m=+149.579562748 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.223510 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" event={"ID":"d5ac8a5c-1fe7-426d-a2f3-819000c75add","Type":"ContainerStarted","Data":"16cfeff26eed8d1a28fabaf7b121fb79ea20e6761a79e82220d2c98e7e4098a2"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.224910 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" event={"ID":"122d3933-8a23-4268-b5e6-9908f55537c0","Type":"ContainerStarted","Data":"90f1d24f6d1af4fdd82018218eb27f31c70ee3def9930dfda3980b1bcb71a77a"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.227013 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" event={"ID":"07c1f9ec-786b-4f7b-9244-cb29ea924da9","Type":"ContainerStarted","Data":"13f9bf5bd11e60ba97cda34a6c62eefca5c1802ae7450fa218cfdf1af2d00526"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.228273 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" event={"ID":"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b","Type":"ContainerStarted","Data":"ab6731350f7206c5667430144007069d7933049985bd786c8f3881d1a3074e16"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.229290 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" event={"ID":"646228e4-463e-4aed-a466-afb944163282","Type":"ContainerStarted","Data":"e7849d6ad4806cc26bf10bece769323b957990a06c1fd72750344b9dd6b954aa"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.238677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4bjp6" event={"ID":"d5d70eb6-6676-49c2-8853-55084c991036","Type":"ContainerStarted","Data":"039ee6e5fcf8d655c526660e3366bdd3a3bdfca4675ae28e4033160c2977dece"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.240769 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" event={"ID":"7e85c769-bd22-49ed-b5b4-8bfd40d7027a","Type":"ContainerStarted","Data":"09e5424fc4e6866aa6d7e4125bc9ebad7b05576dd88cd8208a21803148a68add"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.244220 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" event={"ID":"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b","Type":"ContainerStarted","Data":"cc8f1d4f1b270b52e5af6402477e29bae96fa95dfeb04f65196e59b9ad310213"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.244250 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" event={"ID":"bcfb21f2-e1fe-42f0-b166-a2f50847cc6b","Type":"ContainerStarted","Data":"da52196a0deef63164638667971b7d3d7b02ab5e115b560f7c91f5954e46a699"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.246102 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"aa516a595f213643cc0e1a7e04e2159d2bc6f2dac4be73284895bd27e9eed0a7"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.247587 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ttkxf" event={"ID":"4899f97c-1e4f-4359-a5d4-427f5bd650a4","Type":"ContainerStarted","Data":"a7a52524325a21ab40bee056e87fc67fd2e6dc9aff14e16d16b316c87af68b9a"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.258625 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1e09795f76e40e1e3c9f8e7f1b8e9a2cc68e245f681db3748a6c3a419b64fb63"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.259011 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.266553 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" event={"ID":"14170176-819b-413a-ae4b-8b62d7b606ba","Type":"ContainerStarted","Data":"fd108c9fa86dabc1396ff75e6ab2f969aad6ee8d72bcf794a9b5775cfda17167"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.268267 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b" event={"ID":"5df036a4-ff70-4a7c-8575-cb8c605cef1b","Type":"ContainerStarted","Data":"b0de473b73aace231b9a16cc267284d27684e5bc98f81e36685cb6880022c056"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.269227 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" event={"ID":"11f7e8b6-ef2e-48ca-b841-f3df95c775be","Type":"ContainerStarted","Data":"25c0190779d503cfb36bd820aff800eb9879df5a67360489ab460ed0638b7b96"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.270330 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrxg4" 
event={"ID":"2fa7fe3b-4230-4cbe-a1f5-461458f1d95d","Type":"ContainerStarted","Data":"2710a71d01594bffadb8bcbaf1cf893a0d2959ac7f1ee915c01fdde3f90acf13"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.273635 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" event={"ID":"93874fcd-039f-4572-9f35-24c20dfd93ce","Type":"ContainerStarted","Data":"4530bab25fa80e82d9bcce71cc16d1868f3f142b76d886f91f6f7cd622e6152e"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.276498 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"52477fdcc40892fc821e02aba91709250eace875f03034a7ec41201aac0471bf"} Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.290082 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.290242 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.790224834 +0000 UTC m=+149.680920267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.290537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.291355 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.791334784 +0000 UTC m=+149.682030277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.323697 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-phlmz" event={"ID":"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4","Type":"ContainerStarted","Data":"183d3ce3aa1c1fa76b47eabd80d2188df362e4f311842882d88260f2d1d1a95f"}
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.353896 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" event={"ID":"f2b2ecfc-7839-4364-9e65-988bb4f666f5","Type":"ContainerStarted","Data":"a72a4dd18c94b8e9d57406e0bb49a0e2996f1caa1d9e2d0a3c0de07652afe288"}
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.353978 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g"
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.365011 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g"
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.391253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.391427 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.891406375 +0000 UTC m=+149.782101808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.401164 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm"
Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.402553 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:56.902539786 +0000 UTC m=+149.793235219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.494069 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-phlmz" podStartSLOduration=5.494046972 podStartE2EDuration="5.494046972s" podCreationTimestamp="2025-12-03 17:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:56.488589726 +0000 UTC m=+149.379285159" watchObservedRunningTime="2025-12-03 17:41:56.494046972 +0000 UTC m=+149.384742405"
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.494595 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5f7jg" podStartSLOduration=129.494587556 podStartE2EDuration="2m9.494587556s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:56.426426461 +0000 UTC m=+149.317121894" watchObservedRunningTime="2025-12-03 17:41:56.494587556 +0000 UTC m=+149.385282999"
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.502738 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.503102 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.003082298 +0000 UTC m=+149.893777731 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.522783 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-q8fqs" podStartSLOduration=128.522766713 podStartE2EDuration="2m8.522766713s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:56.522094033 +0000 UTC m=+149.412789466" watchObservedRunningTime="2025-12-03 17:41:56.522766713 +0000 UTC m=+149.413462146"
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.553731 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" podStartSLOduration=128.553715076 podStartE2EDuration="2m8.553715076s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:56.551982757 +0000 UTC m=+149.442678190" watchObservedRunningTime="2025-12-03 17:41:56.553715076 +0000 UTC m=+149.444410509"
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.604438 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm"
Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.604773 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.104759562 +0000 UTC m=+149.995454995 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.706162 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.706324 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.206296269 +0000 UTC m=+150.096991702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.706380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm"
Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.706743 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.206729988 +0000 UTC m=+150.097425421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.807042 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.807393 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.307360455 +0000 UTC m=+150.198055918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:56 crc kubenswrapper[4687]: I1203 17:41:56.908111 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm"
Dec 03 17:41:56 crc kubenswrapper[4687]: E1203 17:41:56.908511 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.408494103 +0000 UTC m=+150.299189536 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.008966 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.009908 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.509890803 +0000 UTC m=+150.400586236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.033708 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.045985 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lg5jg"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.106888 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-774pl"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.110702 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm"
Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.111243 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.611231742 +0000 UTC m=+150.501927175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.148261 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.150180 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.215221 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.215651 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.715636608 +0000 UTC m=+150.606332041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.323094 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm"
Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.323447 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.823435596 +0000 UTC m=+150.714131029 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.390015 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" event={"ID":"07c1f9ec-786b-4f7b-9244-cb29ea924da9","Type":"ContainerStarted","Data":"ff765ba84b06ba6653f08c527014b793df76bd0938577a5927bd6d7ca7c75fed"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.393347 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" event={"ID":"122d3933-8a23-4268-b5e6-9908f55537c0","Type":"ContainerStarted","Data":"53f0911fee764b6b2df16459d9bbc6a781cf9cab490f98d36ee775c31cd4ae70"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.403097 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" event={"ID":"55068ff7-230e-4368-aa62-4b4262d614ce","Type":"ContainerStarted","Data":"904932dee6bf9150fdd61982f97a0e2e1fbba6983ceceba56c854970a1cea0a2"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.420919 4687 generic.go:334] "Generic (PLEG): container finished" podID="7e85c769-bd22-49ed-b5b4-8bfd40d7027a" containerID="45684e31a4048cf6d37c83ffc8c8411765af671192f31aa52bc8cdab4f046455" exitCode=0
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.424004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.424546 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:57.924529793 +0000 UTC m=+150.815225226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.431313 4687 patch_prober.go:28] interesting pod/console-operator-58897d9998-ttkxf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.431370 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-ttkxf" podUID="4899f97c-1e4f-4359-a5d4-427f5bd650a4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.450281 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-hj48t" podStartSLOduration=129.450263261 podStartE2EDuration="2m9.450263261s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:57.449970718 +0000 UTC m=+150.340666151" watchObservedRunningTime="2025-12-03 17:41:57.450263261 +0000 UTC m=+150.340958684"
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.450602 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-ttkxf"
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.450643 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" event={"ID":"93874fcd-039f-4572-9f35-24c20dfd93ce","Type":"ContainerStarted","Data":"01bade93cb1f587b4a7f8fff81e3b1f3fa6c7f4638e7f27d8be3578781452922"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.450662 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" event={"ID":"7e85c769-bd22-49ed-b5b4-8bfd40d7027a","Type":"ContainerDied","Data":"45684e31a4048cf6d37c83ffc8c8411765af671192f31aa52bc8cdab4f046455"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.461471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" event={"ID":"546db82f-4ba0-4b13-a501-064e42360219","Type":"ContainerStarted","Data":"a76ef5f2a7990d7d54fbe2901c2e696c34eb1860033fd2faa208613065dfabc7"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.484913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" event={"ID":"a8580d5e-6e2f-486d-ba5a-eb267a1f2e7b","Type":"ContainerStarted","Data":"df9da04507721f856198f370279e24994f22498bd1dc52ddc8a559513156e765"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.490048 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.506499 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.525689 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" event={"ID":"ccc53c5b-df64-41ca-bee7-9497d7082fec","Type":"ContainerStarted","Data":"a3cc12412d3ea754bc1440163eff30297a02afe9d5a665c37d5c92ce6f0025d0"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.525736 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" event={"ID":"ccc53c5b-df64-41ca-bee7-9497d7082fec","Type":"ContainerStarted","Data":"11f98ba4d24b00a8f733938958cadde5b1b0349db7716b2ce992a845a715e593"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.528952 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm"
Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.530029 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.030016869 +0000 UTC m=+150.920712292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.559999 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" event={"ID":"483b2f56-58e8-4a3a-9b7f-1126d1da77d2","Type":"ContainerStarted","Data":"bb384dd148e44ff4f6e71ac846e909a6fc371951ef2ff95ea0c36e5f39fc8696"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.560790 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-ttkxf" podStartSLOduration=130.560765202 podStartE2EDuration="2m10.560765202s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:57.527956705 +0000 UTC m=+150.418652138" watchObservedRunningTime="2025-12-03 17:41:57.560765202 +0000 UTC m=+150.451460625"
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.574639 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hp9ll" podStartSLOduration=130.574608654 podStartE2EDuration="2m10.574608654s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:57.555968145 +0000 UTC m=+150.446663578" watchObservedRunningTime="2025-12-03 17:41:57.574608654 +0000 UTC m=+150.465304087"
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.576055 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2wpcl"]
Dec 03 17:41:57 crc kubenswrapper[4687]: W1203 17:41:57.605648 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod940c5227_c4b5_4142_92b6_63b408453159.slice/crio-0fd2a39812274f051c48cc018a4c1b9e70837f3ac52f0e50b3d6802e0d672e7a WatchSource:0}: Error finding container 0fd2a39812274f051c48cc018a4c1b9e70837f3ac52f0e50b3d6802e0d672e7a: Status 404 returned error can't find the container with id 0fd2a39812274f051c48cc018a4c1b9e70837f3ac52f0e50b3d6802e0d672e7a
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.611441 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.641851 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rkbht" podStartSLOduration=129.641829178 podStartE2EDuration="2m9.641829178s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:57.589692772 +0000 UTC m=+150.480388205" watchObservedRunningTime="2025-12-03 17:41:57.641829178 +0000 UTC m=+150.532524611"
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.642533 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.643277 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.143259172 +0000 UTC m=+151.033954605 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.676050 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.676097 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mkvps"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.686105 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v4cqf"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.689385 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-phlmz" event={"ID":"bdadef38-6fcb-4b4c-bdea-41f6b5f0fcd4","Type":"ContainerStarted","Data":"0c7146858def0d6b3606bf6389a9f9b165aca976db60fc0147f760da0deea40f"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.703429 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" event={"ID":"8f40af59-1544-4694-a1b7-2a6eee4bc2c8","Type":"ContainerStarted","Data":"2926638cd60b70c77dc177d48d73ffc41631216d7732450f0e96a24d411f7cf2"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.713044 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kv9nd"]
Dec 03 17:41:57 crc kubenswrapper[4687]: W1203 17:41:57.728883 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fa9972a_0306_4ae6_9cef_d7d98214d25c.slice/crio-5918806e1524fb93dd68e2bae3c22858b47855c0abbeab40979314818d1253cd WatchSource:0}: Error finding container 5918806e1524fb93dd68e2bae3c22858b47855c0abbeab40979314818d1253cd: Status 404 returned error can't find the container with id 5918806e1524fb93dd68e2bae3c22858b47855c0abbeab40979314818d1253cd
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.729066 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" event={"ID":"a827c9a9-8ab5-4135-b82d-032a234d0ab0","Type":"ContainerStarted","Data":"72816cf6a5ae9b66f57bc18c3c488a652b0d5121c97f524bda52464dde6c0b3b"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.729102 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" event={"ID":"a827c9a9-8ab5-4135-b82d-032a234d0ab0","Type":"ContainerStarted","Data":"efa86322dffcbc76be4aebf6df7fef6e1762282866e09b22af70cba52528e69f"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.745522 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm"
Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.745876 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.245862807 +0000 UTC m=+151.136558240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.752035 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-s5qm9"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.753588 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.754504 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pc4n2"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.754515 4687 generic.go:334] "Generic (PLEG): container finished" podID="14170176-819b-413a-ae4b-8b62d7b606ba" containerID="8799fb40195449e693df53a1121899dd4be3d486022e11eb304740f503912f7c" exitCode=0
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.754536 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" event={"ID":"14170176-819b-413a-ae4b-8b62d7b606ba","Type":"ContainerDied","Data":"8799fb40195449e693df53a1121899dd4be3d486022e11eb304740f503912f7c"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.764566 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w7v42"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.765635 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.770802 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5ac8a5c-1fe7-426d-a2f3-819000c75add" containerID="f72c83b894db516a297bc5c812e9f134c5d173735416401007001f21c1f4ffd3" exitCode=0
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.771221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" event={"ID":"d5ac8a5c-1fe7-426d-a2f3-819000c75add","Type":"ContainerDied","Data":"f72c83b894db516a297bc5c812e9f134c5d173735416401007001f21c1f4ffd3"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.778672 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.782509 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" event={"ID":"16a03344-c427-400d-a611-a1be677c58b9","Type":"ContainerStarted","Data":"d2ba40971f071f685d512cd57625f2fbc29ee9d4492eb235d1f180156181dc11"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.793797 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7a015a660183f663f96d4101dd4e0c1d7083e6697d0eec8333050fb769c5753e"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.803993 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.814777 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh"]
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.837261 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-zrxg4" event={"ID":"2fa7fe3b-4230-4cbe-a1f5-461458f1d95d","Type":"ContainerStarted","Data":"e0234d49eb74e638c93305bd9b8ff446dabd0b07831b825edb2a80b3945a1e45"}
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.839685 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-zrxg4"
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.844905 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrxg4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.844957 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrxg4" podUID="2fa7fe3b-4230-4cbe-a1f5-461458f1d95d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.846153 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.849253 4687 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.349231726 +0000 UTC m=+151.239927159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.886668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" event={"ID":"f211703d-9bfe-4c35-a761-4f0a572ff317","Type":"ContainerStarted","Data":"35e2403eef33b986b7de8d696f17c6c041d58916c49987d347acacc39ddf535d"} Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.954732 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" event={"ID":"646228e4-463e-4aed-a466-afb944163282","Type":"ContainerStarted","Data":"34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523"} Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.954792 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.956722 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: 
\"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:57 crc kubenswrapper[4687]: E1203 17:41:57.958535 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.458515812 +0000 UTC m=+151.349211235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:57 crc kubenswrapper[4687]: I1203 17:41:57.992785 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.018808 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b" event={"ID":"5df036a4-ff70-4a7c-8575-cb8c605cef1b","Type":"ContainerStarted","Data":"b15fe3496778ad5986744c97cdf02551912147a4894ca596991849aa380af97f"} Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.044719 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ca4aa14c5768c93d331b471be4db5bf95a75bea1ffe54d1dfa6c236039e23d8b"} Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.070558 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xpt4f" podStartSLOduration=130.070542131 podStartE2EDuration="2m10.070542131s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:58.068847165 +0000 UTC m=+150.959542598" watchObservedRunningTime="2025-12-03 17:41:58.070542131 +0000 UTC m=+150.961237564" Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.077570 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.079633 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.579593987 +0000 UTC m=+151.470289430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.088060 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" event={"ID":"11f7e8b6-ef2e-48ca-b841-f3df95c775be","Type":"ContainerStarted","Data":"9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0"} Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.088736 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.090821 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.091228 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.591213151 +0000 UTC m=+151.481908584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.120198 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-4bjp6" event={"ID":"d5d70eb6-6676-49c2-8853-55084c991036","Type":"ContainerStarted","Data":"c5fd1ff2df9ef97c36fb636f2879fd26d8d1121800fb32fe0e1e3cae296f63dc"} Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.192170 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.194434 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.694405781 +0000 UTC m=+151.585101224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.293716 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qxrl7" podStartSLOduration=131.293689327 podStartE2EDuration="2m11.293689327s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:58.292177789 +0000 UTC m=+151.182873232" watchObservedRunningTime="2025-12-03 17:41:58.293689327 +0000 UTC m=+151.184384750" Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.295409 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.302419 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.802390498 +0000 UTC m=+151.693085931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.397275 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.397954 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:58.897936837 +0000 UTC m=+151.788632270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.460485 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.472907 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:41:58 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:41:58 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:41:58 crc kubenswrapper[4687]: healthz check failed Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.472991 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.504799 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.505169 4687 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.005156379 +0000 UTC m=+151.895851812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.605949 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.606613 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.106598613 +0000 UTC m=+151.997294046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.675242 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" podStartSLOduration=131.675205318 podStartE2EDuration="2m11.675205318s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:58.626633663 +0000 UTC m=+151.517329086" watchObservedRunningTime="2025-12-03 17:41:58.675205318 +0000 UTC m=+151.565900751" Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.727048 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.727401 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.227389015 +0000 UTC m=+152.118084448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.829956 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.830232 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.33021602 +0000 UTC m=+152.220911453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.931129 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:58 crc kubenswrapper[4687]: E1203 17:41:58.931505 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.431487106 +0000 UTC m=+152.322182539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:58 crc kubenswrapper[4687]: I1203 17:41:58.964078 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-zrxg4" podStartSLOduration=130.964060531 podStartE2EDuration="2m10.964060531s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:58.963685214 +0000 UTC m=+151.854380647" watchObservedRunningTime="2025-12-03 17:41:58.964060531 +0000 UTC m=+151.854755964" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.033737 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.034101 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.534085151 +0000 UTC m=+152.424780574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.099254 4687 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-nv4n7 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.099326 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" podUID="11f7e8b6-ef2e-48ca-b841-f3df95c775be" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.099454 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hwwgb" podStartSLOduration=131.09942968 podStartE2EDuration="2m11.09942968s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.064870975 +0000 UTC m=+151.955566408" watchObservedRunningTime="2025-12-03 17:41:59.09942968 +0000 UTC m=+151.990125113" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.136060 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.136545 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.636520788 +0000 UTC m=+152.527216221 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.161116 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" podStartSLOduration=131.161096444 podStartE2EDuration="2m11.161096444s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.100710848 +0000 UTC m=+151.991406281" watchObservedRunningTime="2025-12-03 17:41:59.161096444 +0000 UTC m=+152.051791877" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.217389 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-4bjp6" 
podStartSLOduration=131.217356834 podStartE2EDuration="2m11.217356834s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.170305408 +0000 UTC m=+152.061000851" watchObservedRunningTime="2025-12-03 17:41:59.217356834 +0000 UTC m=+152.108052277" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.236736 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.237117 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.737094682 +0000 UTC m=+152.627790115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.238519 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" event={"ID":"940c5227-c4b5-4142-92b6-63b408453159","Type":"ContainerStarted","Data":"29e597726a5b382bf8ea5d6f61e0fe9fee49c9706e0c6f2b64974a3647f05712"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.238562 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" event={"ID":"940c5227-c4b5-4142-92b6-63b408453159","Type":"ContainerStarted","Data":"0fd2a39812274f051c48cc018a4c1b9e70837f3ac52f0e50b3d6802e0d672e7a"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.250577 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh" event={"ID":"850c0a70-321b-4889-85d5-9873c7d1cdad","Type":"ContainerStarted","Data":"3d074b833ee6d1ac6c1e7611ffcbb3a70009f88aac0de9cff751a1982fac0b92"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.250646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh" event={"ID":"850c0a70-321b-4889-85d5-9873c7d1cdad","Type":"ContainerStarted","Data":"20e56936de37fb11456e743e134a1ea1a5acca00ba5c2596fed31d6e2f2b26c2"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.265823 4687 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b" podStartSLOduration=132.265806234 podStartE2EDuration="2m12.265806234s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.263671097 +0000 UTC m=+152.154366530" watchObservedRunningTime="2025-12-03 17:41:59.265806234 +0000 UTC m=+152.156501667" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.290348 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" event={"ID":"8f40af59-1544-4694-a1b7-2a6eee4bc2c8","Type":"ContainerStarted","Data":"49d57e453bab18e73154252bce06a3c36d8bb4eae406626e3e7097f9d55b7c51"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.294543 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" event={"ID":"f211703d-9bfe-4c35-a761-4f0a572ff317","Type":"ContainerStarted","Data":"97abc3b58deec034b417874a1c66829c56d9af8a66e499cc830bf046eaa4df63"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.317085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" event={"ID":"c54388db-0d69-415b-99b8-e60ac35caac2","Type":"ContainerStarted","Data":"1e9b3ff4a7aa3386f1465518db3fda36af2acba31bc40df3614141e39f0243b1"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.318221 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.328369 4687 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7bs82 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.328431 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" podUID="c54388db-0d69-415b-99b8-e60ac35caac2" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": dial tcp 10.217.0.34:8443: connect: connection refused" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.340377 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.342513 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.842498973 +0000 UTC m=+152.733194406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.353943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" event={"ID":"229175a9-fd55-4fd3-a02f-d5087886fe2b","Type":"ContainerStarted","Data":"31e0369179425a9f9f1cac356caaa456b6ba4023da613f2e40d5bd81c49030e2"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.367497 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" event={"ID":"4ed99dad-799d-4601-b839-67fa75f22951","Type":"ContainerStarted","Data":"86aeb7220471ca17dcf1aaf2fccfdf5575a6084fa3cc721b081c67073881eded"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.383891 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8dr4q" podStartSLOduration=131.383874794 podStartE2EDuration="2m11.383874794s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.382410808 +0000 UTC m=+152.273106241" watchObservedRunningTime="2025-12-03 17:41:59.383874794 +0000 UTC m=+152.274570227" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.438275 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-74znc" podStartSLOduration=131.43825686 
podStartE2EDuration="2m11.43825686s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.413456155 +0000 UTC m=+152.304151588" watchObservedRunningTime="2025-12-03 17:41:59.43825686 +0000 UTC m=+152.328952293" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.440978 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.441065 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.941044156 +0000 UTC m=+152.831739589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.441072 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" podStartSLOduration=131.441058966 podStartE2EDuration="2m11.441058966s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.439570079 +0000 UTC m=+152.330265512" watchObservedRunningTime="2025-12-03 17:41:59.441058966 +0000 UTC m=+152.331754399" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.438716 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-ttkxf" event={"ID":"4899f97c-1e4f-4359-a5d4-427f5bd650a4","Type":"ContainerStarted","Data":"4271edaacb317eb996137c53761c0f38e981e62d36ea41316725e934570c7577"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.441523 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.442542 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:41:59.942530423 +0000 UTC m=+152.833225856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.450720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" event={"ID":"483b2f56-58e8-4a3a-9b7f-1126d1da77d2","Type":"ContainerStarted","Data":"2e806763e39c58e5afca766da62b8535472cdfcde96bfb4690a21668506adff9"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.462417 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-ttkxf" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.469442 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:41:59 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:41:59 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:41:59 crc kubenswrapper[4687]: healthz check failed Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.469500 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.484543 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" event={"ID":"15c2c1d3-31da-423e-8e09-8d11382908b5","Type":"ContainerStarted","Data":"73a9aa36634349ca1d5198141060e352391b5bf96283439536260eb00a6afb4a"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.484598 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" event={"ID":"15c2c1d3-31da-423e-8e09-8d11382908b5","Type":"ContainerStarted","Data":"eefa490d39c2080b2f4c8e20c26346ce5e09dba6aff31f45ed11db73a005505f"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.535101 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2wpcl" event={"ID":"1fa9972a-0306-4ae6-9cef-d7d98214d25c","Type":"ContainerStarted","Data":"787ccf42e77f6757949533433795152bf5b08f62ad97b16f753e64bb69173cc9"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.535165 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2wpcl" event={"ID":"1fa9972a-0306-4ae6-9cef-d7d98214d25c","Type":"ContainerStarted","Data":"5918806e1524fb93dd68e2bae3c22858b47855c0abbeab40979314818d1253cd"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.543511 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.544098 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.044075109 +0000 UTC m=+152.934770542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.560554 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" event={"ID":"ebe116b4-b00f-4f26-8456-cfb815889fd0","Type":"ContainerStarted","Data":"c1680c71de7c4c6383471d4b8ad0c69e6ae9854190942a02402af92bbf0322be"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.590041 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lg5jg" podStartSLOduration=131.590018606 podStartE2EDuration="2m11.590018606s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.51919122 +0000 UTC m=+152.409886673" watchObservedRunningTime="2025-12-03 17:41:59.590018606 +0000 UTC m=+152.480714049" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.641152 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" podStartSLOduration=132.641114575 podStartE2EDuration="2m12.641114575s" podCreationTimestamp="2025-12-03 17:39:47 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.591281483 +0000 UTC m=+152.481976916" watchObservedRunningTime="2025-12-03 17:41:59.641114575 +0000 UTC m=+152.531810008" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.645859 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.646464 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.146445044 +0000 UTC m=+153.037140467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.651175 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" event={"ID":"ce78ab98-f777-4d37-a63f-7c58b2281d8e","Type":"ContainerStarted","Data":"112283482ad8ffc1ded9995eb4a5be8db801489fb896ecbbddab271124326be9"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.669355 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" event={"ID":"987a223d-f20b-4288-bd46-cfaecfbd13c7","Type":"ContainerStarted","Data":"c3576898cc5e80d006af74cfe701b2870e07642c653d7b67b824939238bd2122"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.669408 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" event={"ID":"987a223d-f20b-4288-bd46-cfaecfbd13c7","Type":"ContainerStarted","Data":"34ef4188734205ce5d1249f02f4759aba6febd407ef8fdea90e25c5cfba31ab4"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.670246 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.683659 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mkvps" 
event={"ID":"1c55e5e2-5437-468e-9410-605afa2612d9","Type":"ContainerStarted","Data":"4aa0e299c7beecfd0e34299d3a8654324887bf2e0ae08706fa7d8143659c0607"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.691224 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wskjl" podStartSLOduration=131.691208417 podStartE2EDuration="2m11.691208417s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.68902953 +0000 UTC m=+152.579724963" watchObservedRunningTime="2025-12-03 17:41:59.691208417 +0000 UTC m=+152.581903850" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.697763 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.710457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" event={"ID":"e248449e-8a3d-418a-8f0f-0b8484d27c39","Type":"ContainerStarted","Data":"b7bfdd2874b87eabe487d4a18ad58bc645480a600be4769da9a236c48baf2bcf"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.720235 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" podStartSLOduration=131.720219793 podStartE2EDuration="2m11.720219793s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.718855771 +0000 UTC m=+152.609551204" watchObservedRunningTime="2025-12-03 17:41:59.720219793 +0000 UTC m=+152.610915226" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.750677 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.751307 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.25129218 +0000 UTC m=+153.141987613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.755055 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2wpcl" podStartSLOduration=8.755043659 podStartE2EDuration="8.755043659s" podCreationTimestamp="2025-12-03 17:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.754076975 +0000 UTC m=+152.644772408" watchObservedRunningTime="2025-12-03 17:41:59.755043659 +0000 UTC m=+152.645739092" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.761494 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qxn9b" 
event={"ID":"5df036a4-ff70-4a7c-8575-cb8c605cef1b","Type":"ContainerStarted","Data":"7da6d9ac57710b4a8c18af5d28b3dfa546f8964cdc12316154c0998a0a2e9248"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.782032 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" event={"ID":"ff606c22-18f6-4abd-b36a-4650378861d1","Type":"ContainerStarted","Data":"d19793b6fe9bc3083642cc35edba137077ff18e39fe4850abd750b340d0cc2a2"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.796334 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mkvps" podStartSLOduration=131.796318855 podStartE2EDuration="2m11.796318855s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.785337172 +0000 UTC m=+152.676032605" watchObservedRunningTime="2025-12-03 17:41:59.796318855 +0000 UTC m=+152.687014288" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.799845 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" event={"ID":"4a33ff84-0bdb-4f03-a96d-40bd65bc3b95","Type":"ContainerStarted","Data":"25b12e241915e7b0e7a3586641acc6fdf4d960f1dd034433b49d9569f832c5a2"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.818249 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6hvvv" podStartSLOduration=131.818232221 podStartE2EDuration="2m11.818232221s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.815340661 +0000 UTC m=+152.706036094" watchObservedRunningTime="2025-12-03 17:41:59.818232221 +0000 UTC 
m=+152.708927654" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.823866 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" event={"ID":"16a03344-c427-400d-a611-a1be677c58b9","Type":"ContainerStarted","Data":"17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.824717 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.835061 4687 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-774pl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.835108 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" podUID="16a03344-c427-400d-a611-a1be677c58b9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.835642 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" event={"ID":"5e256d34-28bd-40a7-a14b-d76d21fbea56","Type":"ContainerStarted","Data":"07e844b544b2d1c410212f6663a30cd27c7553579caa2edf268a038cb417c5d6"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.835669 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" event={"ID":"5e256d34-28bd-40a7-a14b-d76d21fbea56","Type":"ContainerStarted","Data":"dc24dd68ae5a1ab80f248dfb6fc6c59c3836fff59f392b1bb4305a4ab2a29608"} Dec 03 
17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.836403 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.840862 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" event={"ID":"55068ff7-230e-4368-aa62-4b4262d614ce","Type":"ContainerStarted","Data":"b6fcb33caf63d719f9f2663a6e16747f4ccac2ea37f7b785c4404859440c7f98"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.840894 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" event={"ID":"55068ff7-230e-4368-aa62-4b4262d614ce","Type":"ContainerStarted","Data":"45ab2c2ed175ea928335f7dad067c08319bbc5583591a25773db6cded5385de3"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.852345 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" podStartSLOduration=131.852327214 podStartE2EDuration="2m11.852327214s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.844084764 +0000 UTC m=+152.734780197" watchObservedRunningTime="2025-12-03 17:41:59.852327214 +0000 UTC m=+152.743022647" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.858411 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.859773 4687 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.359756579 +0000 UTC m=+153.250452012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.870040 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pc4n2" event={"ID":"355affcf-dc10-4be7-9500-136e8d4e795b","Type":"ContainerStarted","Data":"27612c2157beebcb1c655b07d45aa1ffdd5c4220a498d4668863ec4d5d4dd78b"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.871808 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" event={"ID":"546db82f-4ba0-4b13-a501-064e42360219","Type":"ContainerStarted","Data":"1462d9b5cb32daaf88490cd1074a33a99c12d9c76e9179dbcfa4eccf9deba518"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.871834 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" event={"ID":"546db82f-4ba0-4b13-a501-064e42360219","Type":"ContainerStarted","Data":"3f099a4a1dc525d17fc7fa7a4120efbcd543533a647dccdd42c841cdb0c858e5"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.921414 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" 
event={"ID":"7e85c769-bd22-49ed-b5b4-8bfd40d7027a","Type":"ContainerStarted","Data":"756eed21cccc4e448311155cac3d0a90fd5f521ddfc0ab4a95d3a1a0602f0f4a"} Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.921812 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrxg4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.921842 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrxg4" podUID="2fa7fe3b-4230-4cbe-a1f5-461458f1d95d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.922469 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.959812 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:41:59 crc kubenswrapper[4687]: E1203 17:41:59.961750 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.461700485 +0000 UTC m=+153.352395918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:41:59 crc kubenswrapper[4687]: I1203 17:41:59.969384 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" podStartSLOduration=131.969366929 podStartE2EDuration="2m11.969366929s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.967713225 +0000 UTC m=+152.858408648" watchObservedRunningTime="2025-12-03 17:41:59.969366929 +0000 UTC m=+152.860062362" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.022233 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfdh" podStartSLOduration=132.022211656 podStartE2EDuration="2m12.022211656s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:41:59.998390005 +0000 UTC m=+152.889085438" watchObservedRunningTime="2025-12-03 17:42:00.022211656 +0000 UTC m=+152.912907089" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.062051 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" podStartSLOduration=132.062026237 podStartE2EDuration="2m12.062026237s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:00.024789672 +0000 UTC m=+152.915485105" watchObservedRunningTime="2025-12-03 17:42:00.062026237 +0000 UTC m=+152.952721670" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.064115 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.064652 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.564629604 +0000 UTC m=+153.455325037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.097354 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" podStartSLOduration=133.097331504 podStartE2EDuration="2m13.097331504s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:00.097211839 +0000 UTC m=+152.987907272" watchObservedRunningTime="2025-12-03 17:42:00.097331504 +0000 UTC m=+152.988026927" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.098042 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ts2g8" podStartSLOduration=132.098035356 podStartE2EDuration="2m12.098035356s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:00.068272548 +0000 UTC m=+152.958967991" watchObservedRunningTime="2025-12-03 17:42:00.098035356 +0000 UTC m=+152.988730789" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.161456 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.208440 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.209043 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.709026079 +0000 UTC m=+153.599721512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.310853 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.311318 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.811296298 +0000 UTC m=+153.701991731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.414922 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.415268 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.915243644 +0000 UTC m=+153.805939077 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.415762 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.416109 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:00.916098102 +0000 UTC m=+153.806793535 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.482363 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:00 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:00 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:00 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.482425 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.521636 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.522067 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 17:42:01.022051809 +0000 UTC m=+153.912747232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.623590 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.623931 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.12391751 +0000 UTC m=+154.014612943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.725102 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.728616 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.228586848 +0000 UTC m=+154.119282271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.826492 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.826867 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.326850498 +0000 UTC m=+154.217545931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.837349 4687 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-hsnjj container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.837422 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" podUID="5e256d34-28bd-40a7-a14b-d76d21fbea56" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.928492 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.928744 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 17:42:01.42871431 +0000 UTC m=+154.319409753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.928983 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:00 crc kubenswrapper[4687]: E1203 17:42:00.929543 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.429535217 +0000 UTC m=+154.320230650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.944248 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" event={"ID":"c54388db-0d69-415b-99b8-e60ac35caac2","Type":"ContainerStarted","Data":"3e943ddf4169f74b1b03b02f0b15db568f85729aa168ac3e441df079afa3dae4"} Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.953068 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" event={"ID":"229175a9-fd55-4fd3-a02f-d5087886fe2b","Type":"ContainerStarted","Data":"150984b360892eb015641dfb1e742966caec29aed2a63c6032dde89eef62bba1"} Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.953115 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" event={"ID":"229175a9-fd55-4fd3-a02f-d5087886fe2b","Type":"ContainerStarted","Data":"d1d04f1b6af7229a7a10c8140ae8df403819ad4ba01c2127924891d0596c8d5f"} Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.957245 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pc4n2" event={"ID":"355affcf-dc10-4be7-9500-136e8d4e795b","Type":"ContainerStarted","Data":"9f8d318e135d621f6b5e9d799a1dd5e4a9c8aa78f28ebba82f2d9a94315fe314"} Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.957476 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pc4n2" 
event={"ID":"355affcf-dc10-4be7-9500-136e8d4e795b","Type":"ContainerStarted","Data":"8f11a49cbb1e37969a2a729e8bd490420010a5dc97daaca3e63f96ced0f830ae"} Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.957605 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pc4n2" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.960414 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xv2xd" event={"ID":"e248449e-8a3d-418a-8f0f-0b8484d27c39","Type":"ContainerStarted","Data":"662379035a06add791dca1a2483d6c45db8dde2cf906e6efa61534ad1181dd4f"} Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.970262 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7bs82" Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.974901 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" event={"ID":"d5ac8a5c-1fe7-426d-a2f3-819000c75add","Type":"ContainerStarted","Data":"dbcf6abf77e23a6e8c85c3d247a92f0e10bb0d1b21b8d9af4eb6c0cfe18529c7"} Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.978388 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" event={"ID":"4ed99dad-799d-4601-b839-67fa75f22951","Type":"ContainerStarted","Data":"d4004cee8c20c9981ebd8cc0c336678fa92564c6011c318402d749151e281ff3"} Dec 03 17:42:00 crc kubenswrapper[4687]: I1203 17:42:00.993197 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh" event={"ID":"850c0a70-321b-4889-85d5-9873c7d1cdad","Type":"ContainerStarted","Data":"c9f607594293b53b701581a4bfa0dac623199b3bda44a48c2739e14234fc3197"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.023672 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" event={"ID":"ff606c22-18f6-4abd-b36a-4650378861d1","Type":"ContainerStarted","Data":"3eff49ecd5fbaf39ae9b1a6a19bb0440b2fba278da2a75cc9834ce88dfbab835"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.023749 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" event={"ID":"ff606c22-18f6-4abd-b36a-4650378861d1","Type":"ContainerStarted","Data":"ace51d98e3f9fb6bc91090fbc47d56f77bb2de38f1485db8b583c38a43bc8db8"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.025217 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pc4n2" podStartSLOduration=10.025198439 podStartE2EDuration="10.025198439s" podCreationTimestamp="2025-12-03 17:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:01.023519694 +0000 UTC m=+153.914215127" watchObservedRunningTime="2025-12-03 17:42:01.025198439 +0000 UTC m=+153.915893872" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.025577 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7v42" podStartSLOduration=133.025571286 podStartE2EDuration="2m13.025571286s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:00.986402744 +0000 UTC m=+153.877098177" watchObservedRunningTime="2025-12-03 17:42:01.025571286 +0000 UTC m=+153.916266709" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.028067 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" 
event={"ID":"14170176-819b-413a-ae4b-8b62d7b606ba","Type":"ContainerStarted","Data":"23b7d67ef40cbcc0609ec1737626889cd59b5869908c69980bdfa695e313e0ca"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.028101 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" event={"ID":"14170176-819b-413a-ae4b-8b62d7b606ba","Type":"ContainerStarted","Data":"7a9811725975178f031fa36ca4e1ee71d8ca56c3397bf9c1889e69da59608d85"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.030085 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.030347 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.5303271 +0000 UTC m=+154.421022543 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.030366 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" event={"ID":"4a33ff84-0bdb-4f03-a96d-40bd65bc3b95","Type":"ContainerStarted","Data":"588b8713599c7fa59abe996a98a89180f7070d32a243f2bb9a130a1bdb91ad53"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.030402 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" event={"ID":"4a33ff84-0bdb-4f03-a96d-40bd65bc3b95","Type":"ContainerStarted","Data":"c928eb3c563b872bf0b77f809af3483d5b33950aa60fdc41efec098f5daf7f8a"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.030528 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.031647 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.531636819 +0000 UTC m=+154.422332252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.039004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" event={"ID":"ebe116b4-b00f-4f26-8456-cfb815889fd0","Type":"ContainerStarted","Data":"bedb3387c6edc63685783ef9695f47db6c6285ca969ce19be5eaeb82967515e1"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.039038 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" event={"ID":"ebe116b4-b00f-4f26-8456-cfb815889fd0","Type":"ContainerStarted","Data":"aac50eae5a85c1b817a966b0457c1d6a378b892a66145a703165b4c1b1ac5bf4"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.039482 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.041851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mkvps" event={"ID":"1c55e5e2-5437-468e-9410-605afa2612d9","Type":"ContainerStarted","Data":"e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.051205 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-s5qm9" 
event={"ID":"ce78ab98-f777-4d37-a63f-7c58b2281d8e","Type":"ContainerStarted","Data":"14b02cb82b6f8ed4040653171cce1c0e0295d59941e2c7e8416c4978ee9055f2"} Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.057514 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.065571 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-s7bxb" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.067289 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" podStartSLOduration=133.067273302 podStartE2EDuration="2m13.067273302s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:01.065198819 +0000 UTC m=+153.955894262" watchObservedRunningTime="2025-12-03 17:42:01.067273302 +0000 UTC m=+153.957968735" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.102623 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qbspz" podStartSLOduration=133.102608841 podStartE2EDuration="2m13.102608841s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:01.099813135 +0000 UTC m=+153.990508578" watchObservedRunningTime="2025-12-03 17:42:01.102608841 +0000 UTC m=+153.993304274" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.132377 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.134284 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.634257755 +0000 UTC m=+154.524953238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.197100 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" podStartSLOduration=133.197080261 podStartE2EDuration="2m13.197080261s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:01.147772273 +0000 UTC m=+154.038467706" watchObservedRunningTime="2025-12-03 17:42:01.197080261 +0000 UTC m=+154.087775694" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.240496 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: 
\"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.240840 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.740826998 +0000 UTC m=+154.631522431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.281843 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" podStartSLOduration=134.281824212 podStartE2EDuration="2m14.281824212s" podCreationTimestamp="2025-12-03 17:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:01.199814764 +0000 UTC m=+154.090510197" watchObservedRunningTime="2025-12-03 17:42:01.281824212 +0000 UTC m=+154.172519645" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.283761 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kt7gh" podStartSLOduration=133.283728038 podStartE2EDuration="2m13.283728038s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:01.282350296 +0000 
UTC m=+154.173045739" watchObservedRunningTime="2025-12-03 17:42:01.283728038 +0000 UTC m=+154.174423471" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.341253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.341452 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.841419293 +0000 UTC m=+154.732114726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.372090 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9rknl"] Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.372992 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.374748 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.397436 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9rknl"] Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.423055 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-v4cqf" podStartSLOduration=133.423038054 podStartE2EDuration="2m13.423038054s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:01.419586639 +0000 UTC m=+154.310282072" watchObservedRunningTime="2025-12-03 17:42:01.423038054 +0000 UTC m=+154.313733487" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.443236 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-utilities\") pod \"certified-operators-9rknl\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") " pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.443281 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.443333 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-catalog-content\") pod \"certified-operators-9rknl\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") " pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.443358 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brg89\" (UniqueName: \"kubernetes.io/projected/73547923-4959-473f-b335-f1bccb070d16-kube-api-access-brg89\") pod \"certified-operators-9rknl\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") " pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.443673 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:01.943657352 +0000 UTC m=+154.834352785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.477935 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:01 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:01 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:01 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.478006 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.544629 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.544841 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 17:42:02.044811521 +0000 UTC m=+154.935506944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.544904 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brg89\" (UniqueName: \"kubernetes.io/projected/73547923-4959-473f-b335-f1bccb070d16-kube-api-access-brg89\") pod \"certified-operators-9rknl\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") " pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.544959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-utilities\") pod \"certified-operators-9rknl\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") " pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.544987 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.545047 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-catalog-content\") pod \"certified-operators-9rknl\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") " pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.545455 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-utilities\") pod \"certified-operators-9rknl\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") " pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.545514 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-catalog-content\") pod \"certified-operators-9rknl\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") " pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.545730 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.045717962 +0000 UTC m=+154.936413525 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.593444 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d59r5"] Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.594555 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.595605 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brg89\" (UniqueName: \"kubernetes.io/projected/73547923-4959-473f-b335-f1bccb070d16-kube-api-access-brg89\") pod \"certified-operators-9rknl\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") " pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.599510 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.628271 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d59r5"] Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.646710 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:01 
crc kubenswrapper[4687]: E1203 17:42:01.646892 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.146866672 +0000 UTC m=+155.037562105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.647297 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.647621 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.147612916 +0000 UTC m=+155.038308349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.689469 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.748727 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.748900 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-catalog-content\") pod \"community-operators-d59r5\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") " pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.748924 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6kfz\" (UniqueName: \"kubernetes.io/projected/4148743d-b671-48a0-b1f0-ad5a3b73a93a-kube-api-access-b6kfz\") pod \"community-operators-d59r5\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") " pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.748974 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-utilities\") pod \"community-operators-d59r5\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") " pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.749076 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.249061248 +0000 UTC m=+155.139756681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.763553 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f4dqh"] Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.764826 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.784037 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4dqh"] Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.851660 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-utilities\") pod \"certified-operators-f4dqh\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.851709 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6kfz\" (UniqueName: \"kubernetes.io/projected/4148743d-b671-48a0-b1f0-ad5a3b73a93a-kube-api-access-b6kfz\") pod \"community-operators-d59r5\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") " pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.851729 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-catalog-content\") pod \"community-operators-d59r5\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") " pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.851775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnssj\" (UniqueName: \"kubernetes.io/projected/5362cb96-c834-44db-8cbe-ff42609ebe76-kube-api-access-hnssj\") pod \"certified-operators-f4dqh\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.851909 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-catalog-content\") pod \"certified-operators-f4dqh\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.851998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.852032 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-utilities\") pod \"community-operators-d59r5\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") " pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.853004 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.352993263 +0000 UTC m=+155.243688696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.853264 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-catalog-content\") pod \"community-operators-d59r5\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") " pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.853807 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-utilities\") pod \"community-operators-d59r5\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") " pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.902928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6kfz\" (UniqueName: \"kubernetes.io/projected/4148743d-b671-48a0-b1f0-ad5a3b73a93a-kube-api-access-b6kfz\") pod \"community-operators-d59r5\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") " pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.915655 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.953539 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.953793 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.453758395 +0000 UTC m=+155.344453848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.954023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-utilities\") pod \"certified-operators-f4dqh\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.954709 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-utilities\") pod \"certified-operators-f4dqh\" (UID: 
\"5362cb96-c834-44db-8cbe-ff42609ebe76\") " pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.954803 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnssj\" (UniqueName: \"kubernetes.io/projected/5362cb96-c834-44db-8cbe-ff42609ebe76-kube-api-access-hnssj\") pod \"certified-operators-f4dqh\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.954903 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-catalog-content\") pod \"certified-operators-f4dqh\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.955490 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-catalog-content\") pod \"certified-operators-f4dqh\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.955647 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:01 crc kubenswrapper[4687]: E1203 17:42:01.956012 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-03 17:42:02.455999266 +0000 UTC m=+155.346694709 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.983434 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fbmtl"] Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.984577 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.989380 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnssj\" (UniqueName: \"kubernetes.io/projected/5362cb96-c834-44db-8cbe-ff42609ebe76-kube-api-access-hnssj\") pod \"certified-operators-f4dqh\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.994629 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-hsnjj" Dec 03 17:42:01 crc kubenswrapper[4687]: I1203 17:42:01.998596 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbmtl"] Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.057312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.057569 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc5jb\" (UniqueName: \"kubernetes.io/projected/a63bf54d-d493-4719-9279-57810413d447-kube-api-access-hc5jb\") pod \"community-operators-fbmtl\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.057670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-catalog-content\") pod \"community-operators-fbmtl\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.057692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-utilities\") pod \"community-operators-fbmtl\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.057791 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.557773764 +0000 UTC m=+155.448469197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.082961 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.085931 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9rknl"] Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.159077 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-catalog-content\") pod \"community-operators-fbmtl\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.159173 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-utilities\") pod \"community-operators-fbmtl\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.159435 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc5jb\" (UniqueName: \"kubernetes.io/projected/a63bf54d-d493-4719-9279-57810413d447-kube-api-access-hc5jb\") pod \"community-operators-fbmtl\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " 
pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.159494 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.174853 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.674832859 +0000 UTC m=+155.565528292 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.233692 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d59r5"] Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.264273 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.264578 4687 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.764552174 +0000 UTC m=+155.655247607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.300721 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-utilities\") pod \"community-operators-fbmtl\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.305317 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-catalog-content\") pod \"community-operators-fbmtl\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.313393 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc5jb\" (UniqueName: \"kubernetes.io/projected/a63bf54d-d493-4719-9279-57810413d447-kube-api-access-hc5jb\") pod \"community-operators-fbmtl\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.320884 4687 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f4dqh"] Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.341480 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.365581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.365906 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:02.865893533 +0000 UTC m=+155.756588966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.463682 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:02 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:02 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:02 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.463750 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.466267 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.467256 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-03 17:42:02.967239781 +0000 UTC m=+155.857935214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.567952 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.568366 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.068349999 +0000 UTC m=+155.959045432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.608056 4687 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.641212 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbmtl"] Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.669357 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.669779 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.16975834 +0000 UTC m=+156.060453783 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.771654 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.772094 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.272075613 +0000 UTC m=+156.162771096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.872296 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.872428 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.372408725 +0000 UTC m=+156.263104158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.872567 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.872822 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.372814804 +0000 UTC m=+156.263510237 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.977829 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.977996 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.477978334 +0000 UTC m=+156.368673767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:02 crc kubenswrapper[4687]: I1203 17:42:02.978020 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:02 crc kubenswrapper[4687]: E1203 17:42:02.978343 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.47833586 +0000 UTC m=+156.369031293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.078963 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.079140 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.579097822 +0000 UTC m=+156.469793245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.079233 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.079495 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.579483129 +0000 UTC m=+156.470178562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.081896 4687 generic.go:334] "Generic (PLEG): container finished" podID="a63bf54d-d493-4719-9279-57810413d447" containerID="122a0c33d6000873bcf3568f96c14f503ba95334f06e8294115b1cc48f12e5a3" exitCode=0 Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.081946 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbmtl" event={"ID":"a63bf54d-d493-4719-9279-57810413d447","Type":"ContainerDied","Data":"122a0c33d6000873bcf3568f96c14f503ba95334f06e8294115b1cc48f12e5a3"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.081969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbmtl" event={"ID":"a63bf54d-d493-4719-9279-57810413d447","Type":"ContainerStarted","Data":"a125c5248d51900405ae01312590ca8612bb044d053aebe81b4e8898300eee25"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.083080 4687 generic.go:334] "Generic (PLEG): container finished" podID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerID="74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e" exitCode=0 Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.083141 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d59r5" event={"ID":"4148743d-b671-48a0-b1f0-ad5a3b73a93a","Type":"ContainerDied","Data":"74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.083156 
4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d59r5" event={"ID":"4148743d-b671-48a0-b1f0-ad5a3b73a93a","Type":"ContainerStarted","Data":"3cff370a485ce7be3b1d4a43a4b2ccfe94c8d63851fd7598c819fa144ef7f76d"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.084048 4687 generic.go:334] "Generic (PLEG): container finished" podID="73547923-4959-473f-b335-f1bccb070d16" containerID="f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2" exitCode=0 Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.084081 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rknl" event={"ID":"73547923-4959-473f-b335-f1bccb070d16","Type":"ContainerDied","Data":"f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.084098 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rknl" event={"ID":"73547923-4959-473f-b335-f1bccb070d16","Type":"ContainerStarted","Data":"1d666dce23c36fce9e7365ab71b17a51d46fa6e15954d80a3ca89d1b6a77289b"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.084890 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.086617 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" event={"ID":"4ed99dad-799d-4601-b839-67fa75f22951","Type":"ContainerStarted","Data":"ed300c7122a0771a861048a49ded8336b5a82c2f513ab72cb714bb724f7f12e2"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.086648 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" event={"ID":"4ed99dad-799d-4601-b839-67fa75f22951","Type":"ContainerStarted","Data":"f670aea77bdbb00cf9a08013bbee7db252d0e7dff7f4fe30cfd7d58ccd11c8ca"} Dec 03 17:42:03 crc 
kubenswrapper[4687]: I1203 17:42:03.099538 4687 generic.go:334] "Generic (PLEG): container finished" podID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerID="705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95" exitCode=0 Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.100467 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4dqh" event={"ID":"5362cb96-c834-44db-8cbe-ff42609ebe76","Type":"ContainerDied","Data":"705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.100501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4dqh" event={"ID":"5362cb96-c834-44db-8cbe-ff42609ebe76","Type":"ContainerStarted","Data":"e2ac83f524f73821f0b6962a53cab6b8fe149ae9725e041f513ecf9972904896"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.108154 4687 generic.go:334] "Generic (PLEG): container finished" podID="15c2c1d3-31da-423e-8e09-8d11382908b5" containerID="73a9aa36634349ca1d5198141060e352391b5bf96283439536260eb00a6afb4a" exitCode=0 Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.108203 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" event={"ID":"15c2c1d3-31da-423e-8e09-8d11382908b5","Type":"ContainerDied","Data":"73a9aa36634349ca1d5198141060e352391b5bf96283439536260eb00a6afb4a"} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.180761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.181168 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.681080109 +0000 UTC m=+156.571775542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.181334 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.181940 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.681918166 +0000 UTC m=+156.572613599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.237288 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.238077 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.240451 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.241555 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.245617 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.282913 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.283052 4687 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.783034245 +0000 UTC m=+156.673729678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.283561 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9b36483-a661-4160-be4e-d6331d142db7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9b36483-a661-4160-be4e-d6331d142db7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.283687 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9b36483-a661-4160-be4e-d6331d142db7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9b36483-a661-4160-be4e-d6331d142db7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.283819 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.284106 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.784097742 +0000 UTC m=+156.674793175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.359166 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-clffd"] Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.360096 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.361818 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.370509 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-clffd"] Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.384560 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.384719 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.884672116 +0000 UTC m=+156.775367549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.384845 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.384918 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9b36483-a661-4160-be4e-d6331d142db7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9b36483-a661-4160-be4e-d6331d142db7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.384947 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9b36483-a661-4160-be4e-d6331d142db7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b9b36483-a661-4160-be4e-d6331d142db7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.385027 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9b36483-a661-4160-be4e-d6331d142db7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"b9b36483-a661-4160-be4e-d6331d142db7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.385698 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.885677442 +0000 UTC m=+156.776372875 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.406341 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9b36483-a661-4160-be4e-d6331d142db7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b9b36483-a661-4160-be4e-d6331d142db7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.464089 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:03 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:03 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:03 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.464169 4687 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.485441 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.485664 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.985641537 +0000 UTC m=+156.876336970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.485766 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-catalog-content\") pod \"redhat-marketplace-clffd\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") " pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.485862 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-xskgw\" (UniqueName: \"kubernetes.io/projected/ad1c1379-bfc3-4496-989d-e24243316f45-kube-api-access-xskgw\") pod \"redhat-marketplace-clffd\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") " pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.485988 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.486658 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-utilities\") pod \"redhat-marketplace-clffd\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") " pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: E1203 17:42:03.486989 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-03 17:42:03.986972087 +0000 UTC m=+156.877667520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-gg6bm" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.555696 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.566399 4687 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-03T17:42:02.608088277Z","Handler":null,"Name":""} Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.572869 4687 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.572962 4687 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.588832 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.589977 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-catalog-content\") pod \"redhat-marketplace-clffd\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") " pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.590012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xskgw\" (UniqueName: \"kubernetes.io/projected/ad1c1379-bfc3-4496-989d-e24243316f45-kube-api-access-xskgw\") pod \"redhat-marketplace-clffd\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") " pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.590103 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-utilities\") pod \"redhat-marketplace-clffd\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") " pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.590561 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-catalog-content\") pod \"redhat-marketplace-clffd\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") " pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.592168 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-utilities\") pod \"redhat-marketplace-clffd\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") " pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.597181 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.607850 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xskgw\" (UniqueName: \"kubernetes.io/projected/ad1c1379-bfc3-4496-989d-e24243316f45-kube-api-access-xskgw\") pod \"redhat-marketplace-clffd\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") " pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.674955 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.701842 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.727567 4687 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.727620 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.742334 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.743348 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.753933 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-gg6bm\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.763389 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.764430 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4hw49"] Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.765409 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.770724 4687 patch_prober.go:28] interesting pod/apiserver-76f77b778f-gd77z container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]log ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]etcd ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/max-in-flight-filter ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 03 17:42:03 crc kubenswrapper[4687]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 03 17:42:03 crc kubenswrapper[4687]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/project.openshift.io-projectcache ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-startinformers ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 03 17:42:03 crc kubenswrapper[4687]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 17:42:03 crc kubenswrapper[4687]: livez check failed Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.770790 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" podUID="14170176-819b-413a-ae4b-8b62d7b606ba" 
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.785069 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hw49"] Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.798441 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.803190 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-utilities\") pod \"redhat-marketplace-4hw49\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.803259 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9948m\" (UniqueName: \"kubernetes.io/projected/9aa3a99c-454e-48aa-9a98-703a5c422d74-kube-api-access-9948m\") pod \"redhat-marketplace-4hw49\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.803294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-catalog-content\") pod \"redhat-marketplace-4hw49\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.856559 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.856620 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.872605 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.906308 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-utilities\") pod \"redhat-marketplace-4hw49\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.906402 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9948m\" (UniqueName: \"kubernetes.io/projected/9aa3a99c-454e-48aa-9a98-703a5c422d74-kube-api-access-9948m\") pod \"redhat-marketplace-4hw49\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.906432 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-catalog-content\") pod \"redhat-marketplace-4hw49\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.908068 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-catalog-content\") pod \"redhat-marketplace-4hw49\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.908397 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-utilities\") pod \"redhat-marketplace-4hw49\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.934832 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9948m\" (UniqueName: \"kubernetes.io/projected/9aa3a99c-454e-48aa-9a98-703a5c422d74-kube-api-access-9948m\") pod \"redhat-marketplace-4hw49\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:03 crc kubenswrapper[4687]: I1203 17:42:03.944073 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-clffd"] Dec 03 17:42:03 crc kubenswrapper[4687]: W1203 17:42:03.975951 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad1c1379_bfc3_4496_989d_e24243316f45.slice/crio-14a4709d69f0046955cdf1c269da0ad08226a4259d52c540cdb33ec7cdf0a3c1 WatchSource:0}: Error finding container 14a4709d69f0046955cdf1c269da0ad08226a4259d52c540cdb33ec7cdf0a3c1: Status 404 returned error can't find the container with id 14a4709d69f0046955cdf1c269da0ad08226a4259d52c540cdb33ec7cdf0a3c1 Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.088669 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gg6bm"] Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.106382 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.145083 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clffd" event={"ID":"ad1c1379-bfc3-4496-989d-e24243316f45","Type":"ContainerStarted","Data":"14a4709d69f0046955cdf1c269da0ad08226a4259d52c540cdb33ec7cdf0a3c1"} Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.150644 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" event={"ID":"4ed99dad-799d-4601-b839-67fa75f22951","Type":"ContainerStarted","Data":"e65468eb88b3be80baa8d6ffba3d51970dde6106c6a8564c5b0c8793412d1122"} Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.168837 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9b36483-a661-4160-be4e-d6331d142db7","Type":"ContainerStarted","Data":"14a8ec86e0dd056449348d1d14135246f3d5d85d50600ff59d548dd3ee56fe24"} Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.172990 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-kv9nd" podStartSLOduration=13.172971663 podStartE2EDuration="13.172971663s" podCreationTimestamp="2025-12-03 17:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:04.167404212 +0000 UTC m=+157.058099655" watchObservedRunningTime="2025-12-03 17:42:04.172971663 +0000 UTC m=+157.063667106" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.180830 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h58rq" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.466606 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:04 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:04 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:04 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.466952 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.524085 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrxg4 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.524194 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-zrxg4" podUID="2fa7fe3b-4230-4cbe-a1f5-461458f1d95d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.524268 4687 patch_prober.go:28] interesting pod/downloads-7954f5f757-zrxg4 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.524356 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-zrxg4" podUID="2fa7fe3b-4230-4cbe-a1f5-461458f1d95d" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.545908 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hw49"] Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.583098 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4j9rv"] Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.586448 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.589864 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.633288 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4j9rv"] Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.736399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-utilities\") pod \"redhat-operators-4j9rv\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") " pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.737149 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzsnb\" (UniqueName: \"kubernetes.io/projected/0a8f332f-4fac-4824-90b9-a922f0bb35c2-kube-api-access-nzsnb\") pod \"redhat-operators-4j9rv\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") " pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.737281 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-catalog-content\") pod \"redhat-operators-4j9rv\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") " pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.784449 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.837800 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l9zv\" (UniqueName: \"kubernetes.io/projected/15c2c1d3-31da-423e-8e09-8d11382908b5-kube-api-access-4l9zv\") pod \"15c2c1d3-31da-423e-8e09-8d11382908b5\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.838200 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15c2c1d3-31da-423e-8e09-8d11382908b5-config-volume\") pod \"15c2c1d3-31da-423e-8e09-8d11382908b5\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.838253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15c2c1d3-31da-423e-8e09-8d11382908b5-secret-volume\") pod \"15c2c1d3-31da-423e-8e09-8d11382908b5\" (UID: \"15c2c1d3-31da-423e-8e09-8d11382908b5\") " Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.838497 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-utilities\") pod \"redhat-operators-4j9rv\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") " pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.838570 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nzsnb\" (UniqueName: \"kubernetes.io/projected/0a8f332f-4fac-4824-90b9-a922f0bb35c2-kube-api-access-nzsnb\") pod \"redhat-operators-4j9rv\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") " pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.838595 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-catalog-content\") pod \"redhat-operators-4j9rv\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") " pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.839219 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-utilities\") pod \"redhat-operators-4j9rv\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") " pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.839242 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-catalog-content\") pod \"redhat-operators-4j9rv\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") " pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.839526 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c2c1d3-31da-423e-8e09-8d11382908b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "15c2c1d3-31da-423e-8e09-8d11382908b5" (UID: "15c2c1d3-31da-423e-8e09-8d11382908b5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.850952 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c2c1d3-31da-423e-8e09-8d11382908b5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15c2c1d3-31da-423e-8e09-8d11382908b5" (UID: "15c2c1d3-31da-423e-8e09-8d11382908b5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.851311 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c2c1d3-31da-423e-8e09-8d11382908b5-kube-api-access-4l9zv" (OuterVolumeSpecName: "kube-api-access-4l9zv") pod "15c2c1d3-31da-423e-8e09-8d11382908b5" (UID: "15c2c1d3-31da-423e-8e09-8d11382908b5"). InnerVolumeSpecName "kube-api-access-4l9zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.854431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzsnb\" (UniqueName: \"kubernetes.io/projected/0a8f332f-4fac-4824-90b9-a922f0bb35c2-kube-api-access-nzsnb\") pod \"redhat-operators-4j9rv\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") " pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.940089 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15c2c1d3-31da-423e-8e09-8d11382908b5-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.940150 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15c2c1d3-31da-423e-8e09-8d11382908b5-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.940160 4687 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-4l9zv\" (UniqueName: \"kubernetes.io/projected/15c2c1d3-31da-423e-8e09-8d11382908b5-kube-api-access-4l9zv\") on node \"crc\" DevicePath \"\"" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.964720 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xds4t"] Dec 03 17:42:04 crc kubenswrapper[4687]: E1203 17:42:04.964920 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c2c1d3-31da-423e-8e09-8d11382908b5" containerName="collect-profiles" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.964930 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c2c1d3-31da-423e-8e09-8d11382908b5" containerName="collect-profiles" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.965029 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c2c1d3-31da-423e-8e09-8d11382908b5" containerName="collect-profiles" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.966244 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:04 crc kubenswrapper[4687]: I1203 17:42:04.970822 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xds4t"] Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.041178 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2ck\" (UniqueName: \"kubernetes.io/projected/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-kube-api-access-ln2ck\") pod \"redhat-operators-xds4t\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.041261 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-catalog-content\") pod \"redhat-operators-xds4t\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.041329 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-utilities\") pod \"redhat-operators-xds4t\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.071802 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.142739 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-utilities\") pod \"redhat-operators-xds4t\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.142836 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln2ck\" (UniqueName: \"kubernetes.io/projected/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-kube-api-access-ln2ck\") pod \"redhat-operators-xds4t\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.142896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-catalog-content\") pod \"redhat-operators-xds4t\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.143422 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-catalog-content\") pod \"redhat-operators-xds4t\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.143998 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-utilities\") pod \"redhat-operators-xds4t\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " 
pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.160249 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2ck\" (UniqueName: \"kubernetes.io/projected/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-kube-api-access-ln2ck\") pod \"redhat-operators-xds4t\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.180998 4687 generic.go:334] "Generic (PLEG): container finished" podID="ad1c1379-bfc3-4496-989d-e24243316f45" containerID="5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07" exitCode=0 Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.181083 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clffd" event={"ID":"ad1c1379-bfc3-4496-989d-e24243316f45","Type":"ContainerDied","Data":"5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07"} Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.184139 4687 generic.go:334] "Generic (PLEG): container finished" podID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerID="08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649" exitCode=0 Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.184180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hw49" event={"ID":"9aa3a99c-454e-48aa-9a98-703a5c422d74","Type":"ContainerDied","Data":"08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649"} Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.184197 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hw49" event={"ID":"9aa3a99c-454e-48aa-9a98-703a5c422d74","Type":"ContainerStarted","Data":"11a3d9fc958f0de7f688795c561fa377b5d2c57e4d1afd9e30ed9346a8b442b3"} Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.189211 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9b36483-a661-4160-be4e-d6331d142db7","Type":"ContainerStarted","Data":"4e99cd046912131037f19155f007b0212014d41e1a26aa2e888a74b4493aee3e"} Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.211448 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" event={"ID":"15c2c1d3-31da-423e-8e09-8d11382908b5","Type":"ContainerDied","Data":"eefa490d39c2080b2f4c8e20c26346ce5e09dba6aff31f45ed11db73a005505f"} Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.211489 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eefa490d39c2080b2f4c8e20c26346ce5e09dba6aff31f45ed11db73a005505f" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.212248 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.217502 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" event={"ID":"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6","Type":"ContainerStarted","Data":"799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca"} Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.217557 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" event={"ID":"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6","Type":"ContainerStarted","Data":"3459d02a369846552269161396f67d71675344f9a5c847d6fc634b44d2ba4a80"} Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.232838 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.232815444 podStartE2EDuration="2.232815444s" 
podCreationTimestamp="2025-12-03 17:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:05.220225257 +0000 UTC m=+158.110920690" watchObservedRunningTime="2025-12-03 17:42:05.232815444 +0000 UTC m=+158.123510877" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.253383 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.254472 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.255776 4687 patch_prober.go:28] interesting pod/console-f9d7485db-mkvps container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.255899 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mkvps" podUID="1c55e5e2-5437-468e-9410-605afa2612d9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.267120 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" podStartSLOduration=137.267102056 podStartE2EDuration="2m17.267102056s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:42:05.266365953 +0000 UTC m=+158.157061406" watchObservedRunningTime="2025-12-03 17:42:05.267102056 +0000 UTC m=+158.157797489" Dec 03 17:42:05 crc 
kubenswrapper[4687]: I1203 17:42:05.293512 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.394093 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4j9rv"] Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.415865 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.462322 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.467577 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:05 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:05 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:05 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.467659 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:05 crc kubenswrapper[4687]: I1203 17:42:05.682641 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xds4t"] Dec 03 17:42:05 crc kubenswrapper[4687]: W1203 17:42:05.702224 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a5839ff_9780_447d_b8a3_ea007b4d2a9d.slice/crio-7a20f0e9490867b57eaff2ac95ac3420139e603943f44bbbfb9a544186532b7c WatchSource:0}: Error finding container 7a20f0e9490867b57eaff2ac95ac3420139e603943f44bbbfb9a544186532b7c: Status 404 returned error can't find the container with id 7a20f0e9490867b57eaff2ac95ac3420139e603943f44bbbfb9a544186532b7c Dec 03 17:42:06 crc kubenswrapper[4687]: I1203 17:42:06.224087 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xds4t" event={"ID":"8a5839ff-9780-447d-b8a3-ea007b4d2a9d","Type":"ContainerStarted","Data":"7a20f0e9490867b57eaff2ac95ac3420139e603943f44bbbfb9a544186532b7c"} Dec 03 17:42:06 crc kubenswrapper[4687]: I1203 17:42:06.227490 4687 generic.go:334] "Generic (PLEG): container finished" podID="b9b36483-a661-4160-be4e-d6331d142db7" containerID="4e99cd046912131037f19155f007b0212014d41e1a26aa2e888a74b4493aee3e" exitCode=0 Dec 03 17:42:06 crc kubenswrapper[4687]: I1203 17:42:06.227590 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9b36483-a661-4160-be4e-d6331d142db7","Type":"ContainerDied","Data":"4e99cd046912131037f19155f007b0212014d41e1a26aa2e888a74b4493aee3e"} Dec 03 17:42:06 crc kubenswrapper[4687]: I1203 17:42:06.234140 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j9rv" event={"ID":"0a8f332f-4fac-4824-90b9-a922f0bb35c2","Type":"ContainerStarted","Data":"8f6da813a9bf253667aeae043b9741f0e248fad5d3b7702f75138a63ca1baf7f"} Dec 03 17:42:06 crc kubenswrapper[4687]: I1203 17:42:06.234307 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:06 crc kubenswrapper[4687]: I1203 17:42:06.463530 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:06 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:06 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:06 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:06 crc kubenswrapper[4687]: I1203 17:42:06.463619 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.241229 4687 generic.go:334] "Generic (PLEG): container finished" podID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerID="bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1" exitCode=0 Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.241321 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j9rv" event={"ID":"0a8f332f-4fac-4824-90b9-a922f0bb35c2","Type":"ContainerDied","Data":"bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1"} Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.463288 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:07 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:07 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:07 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.463335 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.542758 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.606764 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9b36483-a661-4160-be4e-d6331d142db7-kubelet-dir\") pod \"b9b36483-a661-4160-be4e-d6331d142db7\" (UID: \"b9b36483-a661-4160-be4e-d6331d142db7\") " Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.606843 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9b36483-a661-4160-be4e-d6331d142db7-kube-api-access\") pod \"b9b36483-a661-4160-be4e-d6331d142db7\" (UID: \"b9b36483-a661-4160-be4e-d6331d142db7\") " Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.606893 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b36483-a661-4160-be4e-d6331d142db7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b9b36483-a661-4160-be4e-d6331d142db7" (UID: "b9b36483-a661-4160-be4e-d6331d142db7"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.607183 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b9b36483-a661-4160-be4e-d6331d142db7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.617024 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b36483-a661-4160-be4e-d6331d142db7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b9b36483-a661-4160-be4e-d6331d142db7" (UID: "b9b36483-a661-4160-be4e-d6331d142db7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:42:07 crc kubenswrapper[4687]: I1203 17:42:07.708206 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b9b36483-a661-4160-be4e-d6331d142db7-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.231873 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 17:42:08 crc kubenswrapper[4687]: E1203 17:42:08.232562 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b36483-a661-4160-be4e-d6331d142db7" containerName="pruner" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.232578 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b36483-a661-4160-be4e-d6331d142db7" containerName="pruner" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.232698 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b36483-a661-4160-be4e-d6331d142db7" containerName="pruner" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.233240 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.233377 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.236737 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.237569 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.257688 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b9b36483-a661-4160-be4e-d6331d142db7","Type":"ContainerDied","Data":"14a8ec86e0dd056449348d1d14135246f3d5d85d50600ff59d548dd3ee56fe24"} Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.257722 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14a8ec86e0dd056449348d1d14135246f3d5d85d50600ff59d548dd3ee56fe24" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.257785 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.260001 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xds4t" event={"ID":"8a5839ff-9780-447d-b8a3-ea007b4d2a9d","Type":"ContainerStarted","Data":"a61f5221c1516f0f6d059a997cdc9d575af653bba3d69248667e6afeaa350415"} Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.315641 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95218b8-2db2-45cf-ba57-b223ea039801-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d95218b8-2db2-45cf-ba57-b223ea039801\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.315916 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95218b8-2db2-45cf-ba57-b223ea039801-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d95218b8-2db2-45cf-ba57-b223ea039801\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.417835 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95218b8-2db2-45cf-ba57-b223ea039801-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d95218b8-2db2-45cf-ba57-b223ea039801\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.417890 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95218b8-2db2-45cf-ba57-b223ea039801-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d95218b8-2db2-45cf-ba57-b223ea039801\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" 
Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.418333 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95218b8-2db2-45cf-ba57-b223ea039801-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d95218b8-2db2-45cf-ba57-b223ea039801\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.439721 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95218b8-2db2-45cf-ba57-b223ea039801-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d95218b8-2db2-45cf-ba57-b223ea039801\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.463826 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:08 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:08 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:08 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.463891 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.553033 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.740257 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.744007 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:42:08 crc kubenswrapper[4687]: W1203 17:42:08.757378 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd95218b8_2db2_45cf_ba57_b223ea039801.slice/crio-621233c6bf030b0c8d71d88559e83680790e26aa9fb018ca87641ca5edfc6109 WatchSource:0}: Error finding container 621233c6bf030b0c8d71d88559e83680790e26aa9fb018ca87641ca5edfc6109: Status 404 returned error can't find the container with id 621233c6bf030b0c8d71d88559e83680790e26aa9fb018ca87641ca5edfc6109 Dec 03 17:42:08 crc kubenswrapper[4687]: I1203 17:42:08.762736 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gd77z" Dec 03 17:42:09 crc kubenswrapper[4687]: I1203 17:42:09.267293 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95218b8-2db2-45cf-ba57-b223ea039801","Type":"ContainerStarted","Data":"621233c6bf030b0c8d71d88559e83680790e26aa9fb018ca87641ca5edfc6109"} Dec 03 17:42:09 crc kubenswrapper[4687]: I1203 17:42:09.271143 4687 generic.go:334] "Generic (PLEG): container finished" podID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerID="a61f5221c1516f0f6d059a997cdc9d575af653bba3d69248667e6afeaa350415" exitCode=0 Dec 03 17:42:09 crc kubenswrapper[4687]: I1203 17:42:09.271221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xds4t" 
event={"ID":"8a5839ff-9780-447d-b8a3-ea007b4d2a9d","Type":"ContainerDied","Data":"a61f5221c1516f0f6d059a997cdc9d575af653bba3d69248667e6afeaa350415"} Dec 03 17:42:09 crc kubenswrapper[4687]: I1203 17:42:09.465001 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:09 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:09 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:09 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:09 crc kubenswrapper[4687]: I1203 17:42:09.465378 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:10 crc kubenswrapper[4687]: I1203 17:42:10.278677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95218b8-2db2-45cf-ba57-b223ea039801","Type":"ContainerStarted","Data":"2cc9caca92ceb5777398177e9d7ea68a1f369b99ed86018797b1f00637e09f09"} Dec 03 17:42:10 crc kubenswrapper[4687]: I1203 17:42:10.462451 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:10 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:10 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:10 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:10 crc kubenswrapper[4687]: I1203 17:42:10.462531 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" 
podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:10 crc kubenswrapper[4687]: I1203 17:42:10.564277 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pc4n2" Dec 03 17:42:10 crc kubenswrapper[4687]: I1203 17:42:10.956393 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:42:10 crc kubenswrapper[4687]: I1203 17:42:10.962198 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c067216-97d2-43a1-a8a6-5719153b3c61-metrics-certs\") pod \"network-metrics-daemon-w8876\" (UID: \"2c067216-97d2-43a1-a8a6-5719153b3c61\") " pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:42:11 crc kubenswrapper[4687]: I1203 17:42:11.022030 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-w8876" Dec 03 17:42:11 crc kubenswrapper[4687]: I1203 17:42:11.227987 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-w8876"] Dec 03 17:42:11 crc kubenswrapper[4687]: I1203 17:42:11.286623 4687 generic.go:334] "Generic (PLEG): container finished" podID="d95218b8-2db2-45cf-ba57-b223ea039801" containerID="2cc9caca92ceb5777398177e9d7ea68a1f369b99ed86018797b1f00637e09f09" exitCode=0 Dec 03 17:42:11 crc kubenswrapper[4687]: I1203 17:42:11.286714 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95218b8-2db2-45cf-ba57-b223ea039801","Type":"ContainerDied","Data":"2cc9caca92ceb5777398177e9d7ea68a1f369b99ed86018797b1f00637e09f09"} Dec 03 17:42:11 crc kubenswrapper[4687]: W1203 17:42:11.304958 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c067216_97d2_43a1_a8a6_5719153b3c61.slice/crio-f9bc70b392bc9448f6596dbefb8179d37148501905f0f8731b00ee91831aa95f WatchSource:0}: Error finding container f9bc70b392bc9448f6596dbefb8179d37148501905f0f8731b00ee91831aa95f: Status 404 returned error can't find the container with id f9bc70b392bc9448f6596dbefb8179d37148501905f0f8731b00ee91831aa95f Dec 03 17:42:11 crc kubenswrapper[4687]: I1203 17:42:11.464692 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:11 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:11 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:11 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:11 crc kubenswrapper[4687]: I1203 17:42:11.464774 4687 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:12 crc kubenswrapper[4687]: I1203 17:42:12.293294 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w8876" event={"ID":"2c067216-97d2-43a1-a8a6-5719153b3c61","Type":"ContainerStarted","Data":"0225d1040de15ddd8c1a921980c89d5a8b50e1a232551c7df7e32c9a0c379622"} Dec 03 17:42:12 crc kubenswrapper[4687]: I1203 17:42:12.293348 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w8876" event={"ID":"2c067216-97d2-43a1-a8a6-5719153b3c61","Type":"ContainerStarted","Data":"f9bc70b392bc9448f6596dbefb8179d37148501905f0f8731b00ee91831aa95f"} Dec 03 17:42:12 crc kubenswrapper[4687]: I1203 17:42:12.463860 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:12 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:12 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:12 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:12 crc kubenswrapper[4687]: I1203 17:42:12.464195 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:13 crc kubenswrapper[4687]: I1203 17:42:13.463925 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:13 
crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:13 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:13 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:13 crc kubenswrapper[4687]: I1203 17:42:13.463999 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:14 crc kubenswrapper[4687]: I1203 17:42:14.112272 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:42:14 crc kubenswrapper[4687]: I1203 17:42:14.112369 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:42:14 crc kubenswrapper[4687]: I1203 17:42:14.464556 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:14 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:14 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:14 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:14 crc kubenswrapper[4687]: I1203 17:42:14.464649 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:14 crc kubenswrapper[4687]: I1203 17:42:14.535642 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-zrxg4" Dec 03 17:42:15 crc kubenswrapper[4687]: I1203 17:42:15.254026 4687 patch_prober.go:28] interesting pod/console-f9d7485db-mkvps container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Dec 03 17:42:15 crc kubenswrapper[4687]: I1203 17:42:15.254428 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mkvps" podUID="1c55e5e2-5437-468e-9410-605afa2612d9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Dec 03 17:42:15 crc kubenswrapper[4687]: I1203 17:42:15.462838 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:15 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:15 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:15 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:15 crc kubenswrapper[4687]: I1203 17:42:15.462902 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.299011 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.326587 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d95218b8-2db2-45cf-ba57-b223ea039801","Type":"ContainerDied","Data":"621233c6bf030b0c8d71d88559e83680790e26aa9fb018ca87641ca5edfc6109"} Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.326632 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="621233c6bf030b0c8d71d88559e83680790e26aa9fb018ca87641ca5edfc6109" Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.326639 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.431291 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95218b8-2db2-45cf-ba57-b223ea039801-kubelet-dir\") pod \"d95218b8-2db2-45cf-ba57-b223ea039801\" (UID: \"d95218b8-2db2-45cf-ba57-b223ea039801\") " Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.431382 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d95218b8-2db2-45cf-ba57-b223ea039801-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d95218b8-2db2-45cf-ba57-b223ea039801" (UID: "d95218b8-2db2-45cf-ba57-b223ea039801"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.431411 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95218b8-2db2-45cf-ba57-b223ea039801-kube-api-access\") pod \"d95218b8-2db2-45cf-ba57-b223ea039801\" (UID: \"d95218b8-2db2-45cf-ba57-b223ea039801\") " Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.431647 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d95218b8-2db2-45cf-ba57-b223ea039801-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.450979 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95218b8-2db2-45cf-ba57-b223ea039801-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d95218b8-2db2-45cf-ba57-b223ea039801" (UID: "d95218b8-2db2-45cf-ba57-b223ea039801"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.463823 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:16 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:16 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:16 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.463902 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 17:42:16 crc kubenswrapper[4687]: I1203 17:42:16.533088 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d95218b8-2db2-45cf-ba57-b223ea039801-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 17:42:17 crc kubenswrapper[4687]: I1203 17:42:17.462041 4687 patch_prober.go:28] interesting pod/router-default-5444994796-4bjp6 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 17:42:17 crc kubenswrapper[4687]: [-]has-synced failed: reason withheld Dec 03 17:42:17 crc kubenswrapper[4687]: [+]process-running ok Dec 03 17:42:17 crc kubenswrapper[4687]: healthz check failed Dec 03 17:42:17 crc kubenswrapper[4687]: I1203 17:42:17.462108 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-4bjp6" podUID="d5d70eb6-6676-49c2-8853-55084c991036" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 
17:42:18 crc kubenswrapper[4687]: I1203 17:42:18.462707 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:42:18 crc kubenswrapper[4687]: I1203 17:42:18.469050 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-4bjp6" Dec 03 17:42:23 crc kubenswrapper[4687]: I1203 17:42:23.771483 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:42:25 crc kubenswrapper[4687]: I1203 17:42:25.261913 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:42:25 crc kubenswrapper[4687]: I1203 17:42:25.265750 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:42:34 crc kubenswrapper[4687]: I1203 17:42:34.192818 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 03 17:42:35 crc kubenswrapper[4687]: I1203 17:42:35.498364 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bn658" Dec 03 17:42:40 crc kubenswrapper[4687]: E1203 17:42:40.210182 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 17:42:40 crc kubenswrapper[4687]: E1203 17:42:40.210702 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnssj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-f4dqh_openshift-marketplace(5362cb96-c834-44db-8cbe-ff42609ebe76): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 17:42:40 crc kubenswrapper[4687]: E1203 17:42:40.211910 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-f4dqh" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" 
Dec 03 17:42:40 crc kubenswrapper[4687]: E1203 17:42:40.212895 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 03 17:42:40 crc kubenswrapper[4687]: E1203 17:42:40.213070 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brg89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-9rknl_openshift-marketplace(73547923-4959-473f-b335-f1bccb070d16): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 17:42:40 crc kubenswrapper[4687]: E1203 17:42:40.214555 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9rknl" podUID="73547923-4959-473f-b335-f1bccb070d16" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.234589 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 17:42:40 crc kubenswrapper[4687]: E1203 17:42:40.235017 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95218b8-2db2-45cf-ba57-b223ea039801" containerName="pruner" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.235060 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95218b8-2db2-45cf-ba57-b223ea039801" containerName="pruner" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.235492 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95218b8-2db2-45cf-ba57-b223ea039801" containerName="pruner" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.236395 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.245686 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.248901 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.249368 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.262551 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939c3133-15b9-4300-bd04-10adde0a7bd1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"939c3133-15b9-4300-bd04-10adde0a7bd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.262599 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/939c3133-15b9-4300-bd04-10adde0a7bd1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"939c3133-15b9-4300-bd04-10adde0a7bd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.363802 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939c3133-15b9-4300-bd04-10adde0a7bd1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"939c3133-15b9-4300-bd04-10adde0a7bd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.364235 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/939c3133-15b9-4300-bd04-10adde0a7bd1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"939c3133-15b9-4300-bd04-10adde0a7bd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.364348 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/939c3133-15b9-4300-bd04-10adde0a7bd1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"939c3133-15b9-4300-bd04-10adde0a7bd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.386852 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939c3133-15b9-4300-bd04-10adde0a7bd1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"939c3133-15b9-4300-bd04-10adde0a7bd1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:42:40 crc kubenswrapper[4687]: I1203 17:42:40.571077 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:42:44 crc kubenswrapper[4687]: I1203 17:42:44.111734 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:42:44 crc kubenswrapper[4687]: I1203 17:42:44.111827 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.425958 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.426951 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.433645 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/499fb078-750b-4623-a979-d6935e5353c8-kube-api-access\") pod \"installer-9-crc\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.433716 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.433758 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-var-lock\") pod \"installer-9-crc\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.437650 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.535026 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.535147 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-var-lock\") pod \"installer-9-crc\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.535190 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-kubelet-dir\") pod \"installer-9-crc\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.535213 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/499fb078-750b-4623-a979-d6935e5353c8-kube-api-access\") pod \"installer-9-crc\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.535312 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-var-lock\") pod \"installer-9-crc\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.558074 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/499fb078-750b-4623-a979-d6935e5353c8-kube-api-access\") pod \"installer-9-crc\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:45 crc kubenswrapper[4687]: I1203 17:42:45.747736 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:42:53 crc kubenswrapper[4687]: E1203 17:42:53.824990 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 17:42:53 crc kubenswrapper[4687]: E1203 17:42:53.825719 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hc5jb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fbmtl_openshift-marketplace(a63bf54d-d493-4719-9279-57810413d447): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 17:42:53 crc kubenswrapper[4687]: E1203 17:42:53.827050 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-fbmtl" podUID="a63bf54d-d493-4719-9279-57810413d447" Dec 03 17:43:01 crc kubenswrapper[4687]: E1203 17:43:01.778582 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 03 17:43:01 crc kubenswrapper[4687]: E1203 17:43:01.779095 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6kfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-d59r5_openshift-marketplace(4148743d-b671-48a0-b1f0-ad5a3b73a93a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 17:43:01 crc kubenswrapper[4687]: E1203 17:43:01.780334 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-d59r5" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" Dec 03 17:43:02 crc 
kubenswrapper[4687]: E1203 17:43:02.725964 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fbmtl" podUID="a63bf54d-d493-4719-9279-57810413d447" Dec 03 17:43:02 crc kubenswrapper[4687]: E1203 17:43:02.725967 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-d59r5" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" Dec 03 17:43:02 crc kubenswrapper[4687]: E1203 17:43:02.778846 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 17:43:02 crc kubenswrapper[4687]: E1203 17:43:02.779023 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ln2ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xds4t_openshift-marketplace(8a5839ff-9780-447d-b8a3-ea007b4d2a9d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 17:43:02 crc kubenswrapper[4687]: E1203 17:43:02.780167 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xds4t" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" Dec 03 17:43:09 crc 
kubenswrapper[4687]: E1203 17:43:09.155637 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 03 17:43:09 crc kubenswrapper[4687]: E1203 17:43:09.156093 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nzsnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-4j9rv_openshift-marketplace(0a8f332f-4fac-4824-90b9-a922f0bb35c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 17:43:09 crc kubenswrapper[4687]: E1203 17:43:09.157344 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4j9rv" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" Dec 03 17:43:09 crc kubenswrapper[4687]: E1203 17:43:09.189420 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 17:43:09 crc kubenswrapper[4687]: E1203 17:43:09.189620 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xskgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-clffd_openshift-marketplace(ad1c1379-bfc3-4496-989d-e24243316f45): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 17:43:09 crc kubenswrapper[4687]: E1203 17:43:09.190860 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-clffd" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" Dec 03 17:43:09 crc 
kubenswrapper[4687]: E1203 17:43:09.301659 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 03 17:43:09 crc kubenswrapper[4687]: E1203 17:43:09.302206 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9948m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-4hw49_openshift-marketplace(9aa3a99c-454e-48aa-9a98-703a5c422d74): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 03 17:43:09 crc kubenswrapper[4687]: E1203 17:43:09.303641 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4hw49" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" Dec 03 17:43:09 crc kubenswrapper[4687]: I1203 17:43:09.631662 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rknl" event={"ID":"73547923-4959-473f-b335-f1bccb070d16","Type":"ContainerStarted","Data":"bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9"} Dec 03 17:43:09 crc kubenswrapper[4687]: I1203 17:43:09.639343 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 03 17:43:09 crc kubenswrapper[4687]: I1203 17:43:09.639415 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4dqh" event={"ID":"5362cb96-c834-44db-8cbe-ff42609ebe76","Type":"ContainerStarted","Data":"3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284"} Dec 03 17:43:09 crc kubenswrapper[4687]: I1203 17:43:09.643064 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-w8876" event={"ID":"2c067216-97d2-43a1-a8a6-5719153b3c61","Type":"ContainerStarted","Data":"8700bed0b66497cac61962234f43f03ad43accd975ee06f173be07b533233933"} Dec 03 17:43:09 crc kubenswrapper[4687]: E1203 17:43:09.643859 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4hw49" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" Dec 03 17:43:09 crc kubenswrapper[4687]: E1203 17:43:09.644532 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-clffd" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" Dec 03 17:43:09 crc kubenswrapper[4687]: I1203 17:43:09.707357 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 03 17:43:09 crc kubenswrapper[4687]: I1203 17:43:09.724503 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-w8876" podStartSLOduration=201.72446857 podStartE2EDuration="3m21.72446857s" podCreationTimestamp="2025-12-03 17:39:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:43:09.72076789 +0000 UTC m=+222.611463323" watchObservedRunningTime="2025-12-03 17:43:09.72446857 +0000 UTC m=+222.615164003" Dec 03 17:43:09 crc kubenswrapper[4687]: W1203 17:43:09.729006 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod499fb078_750b_4623_a979_d6935e5353c8.slice/crio-f5b8492b605452ebfd2330f1e537bd73a18fe590dae26639a0dc169ad4c20c78 WatchSource:0}: Error finding container f5b8492b605452ebfd2330f1e537bd73a18fe590dae26639a0dc169ad4c20c78: Status 404 returned error can't find the container with id f5b8492b605452ebfd2330f1e537bd73a18fe590dae26639a0dc169ad4c20c78 Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 17:43:10.649748 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"939c3133-15b9-4300-bd04-10adde0a7bd1","Type":"ContainerStarted","Data":"886181234fe6aaefb4965f8b79e5c722c0557ea3809b9c5f543e15b4045c5c83"} Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 17:43:10.650274 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"939c3133-15b9-4300-bd04-10adde0a7bd1","Type":"ContainerStarted","Data":"068abd3d94a1e494427a3ac409f6d745ee4a3cccb0b2188ee85f8107a283ddbe"} Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 17:43:10.654103 4687 generic.go:334] "Generic (PLEG): container finished" podID="73547923-4959-473f-b335-f1bccb070d16" containerID="bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9" exitCode=0 Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 17:43:10.654320 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rknl" event={"ID":"73547923-4959-473f-b335-f1bccb070d16","Type":"ContainerDied","Data":"bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9"} Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 17:43:10.659682 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"499fb078-750b-4623-a979-d6935e5353c8","Type":"ContainerStarted","Data":"1bf7609808a2a47ee70b557bc3d01ab18d7ee234e02d640e3bec0b1f61498ab3"} Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 17:43:10.659755 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"499fb078-750b-4623-a979-d6935e5353c8","Type":"ContainerStarted","Data":"f5b8492b605452ebfd2330f1e537bd73a18fe590dae26639a0dc169ad4c20c78"} Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 17:43:10.667821 4687 generic.go:334] "Generic (PLEG): container finished" podID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerID="3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284" exitCode=0 Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 
17:43:10.668159 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4dqh" event={"ID":"5362cb96-c834-44db-8cbe-ff42609ebe76","Type":"ContainerDied","Data":"3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284"} Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 17:43:10.693832 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=30.693814735 podStartE2EDuration="30.693814735s" podCreationTimestamp="2025-12-03 17:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:43:10.665637305 +0000 UTC m=+223.556332728" watchObservedRunningTime="2025-12-03 17:43:10.693814735 +0000 UTC m=+223.584510168" Dec 03 17:43:10 crc kubenswrapper[4687]: I1203 17:43:10.709533 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=25.709509012 podStartE2EDuration="25.709509012s" podCreationTimestamp="2025-12-03 17:42:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:43:10.705300296 +0000 UTC m=+223.595995729" watchObservedRunningTime="2025-12-03 17:43:10.709509012 +0000 UTC m=+223.600204445" Dec 03 17:43:11 crc kubenswrapper[4687]: I1203 17:43:11.677866 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4dqh" event={"ID":"5362cb96-c834-44db-8cbe-ff42609ebe76","Type":"ContainerStarted","Data":"aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b"} Dec 03 17:43:11 crc kubenswrapper[4687]: I1203 17:43:11.681719 4687 generic.go:334] "Generic (PLEG): container finished" podID="939c3133-15b9-4300-bd04-10adde0a7bd1" containerID="886181234fe6aaefb4965f8b79e5c722c0557ea3809b9c5f543e15b4045c5c83" 
exitCode=0 Dec 03 17:43:11 crc kubenswrapper[4687]: I1203 17:43:11.681826 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"939c3133-15b9-4300-bd04-10adde0a7bd1","Type":"ContainerDied","Data":"886181234fe6aaefb4965f8b79e5c722c0557ea3809b9c5f543e15b4045c5c83"} Dec 03 17:43:11 crc kubenswrapper[4687]: I1203 17:43:11.696102 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f4dqh" podStartSLOduration=2.6425868599999998 podStartE2EDuration="1m10.696083713s" podCreationTimestamp="2025-12-03 17:42:01 +0000 UTC" firstStartedPulling="2025-12-03 17:42:03.103212376 +0000 UTC m=+155.993907809" lastFinishedPulling="2025-12-03 17:43:11.156709229 +0000 UTC m=+224.047404662" observedRunningTime="2025-12-03 17:43:11.694318656 +0000 UTC m=+224.585014139" watchObservedRunningTime="2025-12-03 17:43:11.696083713 +0000 UTC m=+224.586779146" Dec 03 17:43:12 crc kubenswrapper[4687]: I1203 17:43:12.083622 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:43:12 crc kubenswrapper[4687]: I1203 17:43:12.083958 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:43:12 crc kubenswrapper[4687]: I1203 17:43:12.689341 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rknl" event={"ID":"73547923-4959-473f-b335-f1bccb070d16","Type":"ContainerStarted","Data":"ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f"} Dec 03 17:43:12 crc kubenswrapper[4687]: I1203 17:43:12.708383 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9rknl" podStartSLOduration=2.95265576 podStartE2EDuration="1m11.708350365s" podCreationTimestamp="2025-12-03 17:42:01 +0000 UTC" 
firstStartedPulling="2025-12-03 17:42:03.085185725 +0000 UTC m=+155.975881168" lastFinishedPulling="2025-12-03 17:43:11.84088034 +0000 UTC m=+224.731575773" observedRunningTime="2025-12-03 17:43:12.706918088 +0000 UTC m=+225.597613561" watchObservedRunningTime="2025-12-03 17:43:12.708350365 +0000 UTC m=+225.599045798" Dec 03 17:43:12 crc kubenswrapper[4687]: I1203 17:43:12.942248 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.051920 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939c3133-15b9-4300-bd04-10adde0a7bd1-kube-api-access\") pod \"939c3133-15b9-4300-bd04-10adde0a7bd1\" (UID: \"939c3133-15b9-4300-bd04-10adde0a7bd1\") " Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.051984 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/939c3133-15b9-4300-bd04-10adde0a7bd1-kubelet-dir\") pod \"939c3133-15b9-4300-bd04-10adde0a7bd1\" (UID: \"939c3133-15b9-4300-bd04-10adde0a7bd1\") " Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.052156 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/939c3133-15b9-4300-bd04-10adde0a7bd1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "939c3133-15b9-4300-bd04-10adde0a7bd1" (UID: "939c3133-15b9-4300-bd04-10adde0a7bd1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.052358 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/939c3133-15b9-4300-bd04-10adde0a7bd1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.057508 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939c3133-15b9-4300-bd04-10adde0a7bd1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "939c3133-15b9-4300-bd04-10adde0a7bd1" (UID: "939c3133-15b9-4300-bd04-10adde0a7bd1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.146365 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-f4dqh" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerName="registry-server" probeResult="failure" output=< Dec 03 17:43:13 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Dec 03 17:43:13 crc kubenswrapper[4687]: > Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.153722 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/939c3133-15b9-4300-bd04-10adde0a7bd1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.695147 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.695147 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"939c3133-15b9-4300-bd04-10adde0a7bd1","Type":"ContainerDied","Data":"068abd3d94a1e494427a3ac409f6d745ee4a3cccb0b2188ee85f8107a283ddbe"} Dec 03 17:43:13 crc kubenswrapper[4687]: I1203 17:43:13.695459 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="068abd3d94a1e494427a3ac409f6d745ee4a3cccb0b2188ee85f8107a283ddbe" Dec 03 17:43:14 crc kubenswrapper[4687]: I1203 17:43:14.111089 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:43:14 crc kubenswrapper[4687]: I1203 17:43:14.111461 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:43:14 crc kubenswrapper[4687]: I1203 17:43:14.111531 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:43:14 crc kubenswrapper[4687]: I1203 17:43:14.112315 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Dec 03 17:43:14 crc kubenswrapper[4687]: I1203 17:43:14.112456 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398" gracePeriod=600 Dec 03 17:43:14 crc kubenswrapper[4687]: I1203 17:43:14.706276 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398" exitCode=0 Dec 03 17:43:14 crc kubenswrapper[4687]: I1203 17:43:14.706362 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398"} Dec 03 17:43:14 crc kubenswrapper[4687]: I1203 17:43:14.709560 4687 generic.go:334] "Generic (PLEG): container finished" podID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerID="89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0" exitCode=0 Dec 03 17:43:14 crc kubenswrapper[4687]: I1203 17:43:14.709598 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d59r5" event={"ID":"4148743d-b671-48a0-b1f0-ad5a3b73a93a","Type":"ContainerDied","Data":"89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0"} Dec 03 17:43:15 crc kubenswrapper[4687]: I1203 17:43:15.717866 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"830dea32fbec17f41ad28fddfaf773cf970c307273af21e7663ef8a4b33a9fd6"} Dec 03 17:43:15 crc kubenswrapper[4687]: I1203 17:43:15.720942 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xds4t" event={"ID":"8a5839ff-9780-447d-b8a3-ea007b4d2a9d","Type":"ContainerStarted","Data":"a95e0da25bbd894c714fb42f04504bd37978d6487dbfa9fd92938bf359af7aad"} Dec 03 17:43:15 crc kubenswrapper[4687]: I1203 17:43:15.723710 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d59r5" event={"ID":"4148743d-b671-48a0-b1f0-ad5a3b73a93a","Type":"ContainerStarted","Data":"b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b"} Dec 03 17:43:15 crc kubenswrapper[4687]: I1203 17:43:15.777663 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d59r5" podStartSLOduration=2.484706024 podStartE2EDuration="1m14.777633768s" podCreationTimestamp="2025-12-03 17:42:01 +0000 UTC" firstStartedPulling="2025-12-03 17:42:03.084507715 +0000 UTC m=+155.975203148" lastFinishedPulling="2025-12-03 17:43:15.377435439 +0000 UTC m=+228.268130892" observedRunningTime="2025-12-03 17:43:15.774648462 +0000 UTC m=+228.665343895" watchObservedRunningTime="2025-12-03 17:43:15.777633768 +0000 UTC m=+228.668329211" Dec 03 17:43:16 crc kubenswrapper[4687]: I1203 17:43:16.737092 4687 generic.go:334] "Generic (PLEG): container finished" podID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerID="a95e0da25bbd894c714fb42f04504bd37978d6487dbfa9fd92938bf359af7aad" exitCode=0 Dec 03 17:43:16 crc kubenswrapper[4687]: I1203 17:43:16.737185 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xds4t" event={"ID":"8a5839ff-9780-447d-b8a3-ea007b4d2a9d","Type":"ContainerDied","Data":"a95e0da25bbd894c714fb42f04504bd37978d6487dbfa9fd92938bf359af7aad"} Dec 03 17:43:17 crc kubenswrapper[4687]: I1203 17:43:17.745741 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xds4t" 
event={"ID":"8a5839ff-9780-447d-b8a3-ea007b4d2a9d","Type":"ContainerStarted","Data":"8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720"} Dec 03 17:43:17 crc kubenswrapper[4687]: I1203 17:43:17.768029 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xds4t" podStartSLOduration=6.615808941 podStartE2EDuration="1m13.768006507s" podCreationTimestamp="2025-12-03 17:42:04 +0000 UTC" firstStartedPulling="2025-12-03 17:42:10.28177673 +0000 UTC m=+163.172472163" lastFinishedPulling="2025-12-03 17:43:17.433974256 +0000 UTC m=+230.324669729" observedRunningTime="2025-12-03 17:43:17.767558772 +0000 UTC m=+230.658254205" watchObservedRunningTime="2025-12-03 17:43:17.768006507 +0000 UTC m=+230.658701940" Dec 03 17:43:19 crc kubenswrapper[4687]: I1203 17:43:19.759528 4687 generic.go:334] "Generic (PLEG): container finished" podID="a63bf54d-d493-4719-9279-57810413d447" containerID="b602ab837670e4f3a6131e7b043b118242020c491c69d340126e9d0420c94e95" exitCode=0 Dec 03 17:43:19 crc kubenswrapper[4687]: I1203 17:43:19.759588 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbmtl" event={"ID":"a63bf54d-d493-4719-9279-57810413d447","Type":"ContainerDied","Data":"b602ab837670e4f3a6131e7b043b118242020c491c69d340126e9d0420c94e95"} Dec 03 17:43:21 crc kubenswrapper[4687]: I1203 17:43:21.690223 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:43:21 crc kubenswrapper[4687]: I1203 17:43:21.690490 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:43:21 crc kubenswrapper[4687]: I1203 17:43:21.734458 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:43:21 crc kubenswrapper[4687]: I1203 17:43:21.817190 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9rknl" Dec 03 17:43:21 crc kubenswrapper[4687]: I1203 17:43:21.916801 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:43:21 crc kubenswrapper[4687]: I1203 17:43:21.918173 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:43:21 crc kubenswrapper[4687]: I1203 17:43:21.951919 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:43:22 crc kubenswrapper[4687]: I1203 17:43:22.120068 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:43:22 crc kubenswrapper[4687]: I1203 17:43:22.162619 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:43:22 crc kubenswrapper[4687]: I1203 17:43:22.816821 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d59r5" Dec 03 17:43:23 crc kubenswrapper[4687]: I1203 17:43:23.678530 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4dqh"] Dec 03 17:43:23 crc kubenswrapper[4687]: I1203 17:43:23.791457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbmtl" event={"ID":"a63bf54d-d493-4719-9279-57810413d447","Type":"ContainerStarted","Data":"4b020e722146ec78f997523b1c53cfdc6782267524b068bb13fc65d756787504"} Dec 03 17:43:23 crc kubenswrapper[4687]: I1203 17:43:23.791720 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f4dqh" 
podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerName="registry-server" containerID="cri-o://aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b" gracePeriod=2 Dec 03 17:43:23 crc kubenswrapper[4687]: I1203 17:43:23.825063 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fbmtl" podStartSLOduration=2.986948439 podStartE2EDuration="1m22.82504296s" podCreationTimestamp="2025-12-03 17:42:01 +0000 UTC" firstStartedPulling="2025-12-03 17:42:03.084552227 +0000 UTC m=+155.975247660" lastFinishedPulling="2025-12-03 17:43:22.922646738 +0000 UTC m=+235.813342181" observedRunningTime="2025-12-03 17:43:23.819538032 +0000 UTC m=+236.710233485" watchObservedRunningTime="2025-12-03 17:43:23.82504296 +0000 UTC m=+236.715738393" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.161798 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.315729 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-catalog-content\") pod \"5362cb96-c834-44db-8cbe-ff42609ebe76\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.315813 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnssj\" (UniqueName: \"kubernetes.io/projected/5362cb96-c834-44db-8cbe-ff42609ebe76-kube-api-access-hnssj\") pod \"5362cb96-c834-44db-8cbe-ff42609ebe76\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.315841 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-utilities\") pod 
\"5362cb96-c834-44db-8cbe-ff42609ebe76\" (UID: \"5362cb96-c834-44db-8cbe-ff42609ebe76\") " Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.316801 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-utilities" (OuterVolumeSpecName: "utilities") pod "5362cb96-c834-44db-8cbe-ff42609ebe76" (UID: "5362cb96-c834-44db-8cbe-ff42609ebe76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.324911 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5362cb96-c834-44db-8cbe-ff42609ebe76-kube-api-access-hnssj" (OuterVolumeSpecName: "kube-api-access-hnssj") pod "5362cb96-c834-44db-8cbe-ff42609ebe76" (UID: "5362cb96-c834-44db-8cbe-ff42609ebe76"). InnerVolumeSpecName "kube-api-access-hnssj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.380802 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5362cb96-c834-44db-8cbe-ff42609ebe76" (UID: "5362cb96-c834-44db-8cbe-ff42609ebe76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.417770 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.417808 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnssj\" (UniqueName: \"kubernetes.io/projected/5362cb96-c834-44db-8cbe-ff42609ebe76-kube-api-access-hnssj\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.417826 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5362cb96-c834-44db-8cbe-ff42609ebe76-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.810084 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j9rv" event={"ID":"0a8f332f-4fac-4824-90b9-a922f0bb35c2","Type":"ContainerStarted","Data":"84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9"} Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.818793 4687 generic.go:334] "Generic (PLEG): container finished" podID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerID="aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b" exitCode=0 Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.818872 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4dqh" event={"ID":"5362cb96-c834-44db-8cbe-ff42609ebe76","Type":"ContainerDied","Data":"aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b"} Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.818907 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f4dqh" 
event={"ID":"5362cb96-c834-44db-8cbe-ff42609ebe76","Type":"ContainerDied","Data":"e2ac83f524f73821f0b6962a53cab6b8fe149ae9725e041f513ecf9972904896"} Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.818928 4687 scope.go:117] "RemoveContainer" containerID="aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.819103 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f4dqh" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.825575 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clffd" event={"ID":"ad1c1379-bfc3-4496-989d-e24243316f45","Type":"ContainerStarted","Data":"5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742"} Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.835022 4687 generic.go:334] "Generic (PLEG): container finished" podID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerID="c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237" exitCode=0 Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.835169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hw49" event={"ID":"9aa3a99c-454e-48aa-9a98-703a5c422d74","Type":"ContainerDied","Data":"c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237"} Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.845915 4687 scope.go:117] "RemoveContainer" containerID="3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284" Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.874737 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f4dqh"] Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 17:43:24.877918 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f4dqh"] Dec 03 17:43:24 crc kubenswrapper[4687]: I1203 
17:43:24.883651 4687 scope.go:117] "RemoveContainer" containerID="705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.007097 4687 scope.go:117] "RemoveContainer" containerID="aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b" Dec 03 17:43:25 crc kubenswrapper[4687]: E1203 17:43:25.008244 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b\": container with ID starting with aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b not found: ID does not exist" containerID="aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.008278 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b"} err="failed to get container status \"aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b\": rpc error: code = NotFound desc = could not find container \"aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b\": container with ID starting with aa92bf1ce7dc01e59c3024d74d195b0c59e331417e36a46e2e968e6fba90570b not found: ID does not exist" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.008301 4687 scope.go:117] "RemoveContainer" containerID="3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284" Dec 03 17:43:25 crc kubenswrapper[4687]: E1203 17:43:25.008562 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284\": container with ID starting with 3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284 not found: ID does not exist" 
containerID="3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.008587 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284"} err="failed to get container status \"3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284\": rpc error: code = NotFound desc = could not find container \"3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284\": container with ID starting with 3e375e285db85421f4d6df2f26a0db76bf513b0a731d6a91f180068071fce284 not found: ID does not exist" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.008600 4687 scope.go:117] "RemoveContainer" containerID="705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95" Dec 03 17:43:25 crc kubenswrapper[4687]: E1203 17:43:25.008877 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95\": container with ID starting with 705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95 not found: ID does not exist" containerID="705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.008903 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95"} err="failed to get container status \"705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95\": rpc error: code = NotFound desc = could not find container \"705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95\": container with ID starting with 705a811c3e52ea2150f066f69161efbed0301f1ad336aa9fc97f2dc6370a8f95 not found: ID does not exist" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.294820 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.295270 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.342172 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.413615 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" path="/var/lib/kubelet/pods/5362cb96-c834-44db-8cbe-ff42609ebe76/volumes" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.848179 4687 generic.go:334] "Generic (PLEG): container finished" podID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerID="84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9" exitCode=0 Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.848230 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j9rv" event={"ID":"0a8f332f-4fac-4824-90b9-a922f0bb35c2","Type":"ContainerDied","Data":"84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9"} Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.856837 4687 generic.go:334] "Generic (PLEG): container finished" podID="ad1c1379-bfc3-4496-989d-e24243316f45" containerID="5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742" exitCode=0 Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.856912 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clffd" event={"ID":"ad1c1379-bfc3-4496-989d-e24243316f45","Type":"ContainerDied","Data":"5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742"} Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.859635 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-4hw49" event={"ID":"9aa3a99c-454e-48aa-9a98-703a5c422d74","Type":"ContainerStarted","Data":"b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f"} Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.901639 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4hw49" podStartSLOduration=2.764158068 podStartE2EDuration="1m22.901620324s" podCreationTimestamp="2025-12-03 17:42:03 +0000 UTC" firstStartedPulling="2025-12-03 17:42:05.18537417 +0000 UTC m=+158.076069603" lastFinishedPulling="2025-12-03 17:43:25.322836426 +0000 UTC m=+238.213531859" observedRunningTime="2025-12-03 17:43:25.898551635 +0000 UTC m=+238.789247088" watchObservedRunningTime="2025-12-03 17:43:25.901620324 +0000 UTC m=+238.792315757" Dec 03 17:43:25 crc kubenswrapper[4687]: I1203 17:43:25.909921 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:43:26 crc kubenswrapper[4687]: I1203 17:43:26.866715 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clffd" event={"ID":"ad1c1379-bfc3-4496-989d-e24243316f45","Type":"ContainerStarted","Data":"9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512"} Dec 03 17:43:26 crc kubenswrapper[4687]: I1203 17:43:26.868774 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j9rv" event={"ID":"0a8f332f-4fac-4824-90b9-a922f0bb35c2","Type":"ContainerStarted","Data":"947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b"} Dec 03 17:43:26 crc kubenswrapper[4687]: I1203 17:43:26.883449 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-clffd" podStartSLOduration=2.5553679049999998 podStartE2EDuration="1m23.883432662s" podCreationTimestamp="2025-12-03 17:42:03 +0000 UTC" 
firstStartedPulling="2025-12-03 17:42:05.184036889 +0000 UTC m=+158.074732322" lastFinishedPulling="2025-12-03 17:43:26.512101656 +0000 UTC m=+239.402797079" observedRunningTime="2025-12-03 17:43:26.881849921 +0000 UTC m=+239.772545354" watchObservedRunningTime="2025-12-03 17:43:26.883432662 +0000 UTC m=+239.774128095" Dec 03 17:43:30 crc kubenswrapper[4687]: I1203 17:43:30.072869 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4j9rv" podStartSLOduration=8.726796624 podStartE2EDuration="1m26.072848887s" podCreationTimestamp="2025-12-03 17:42:04 +0000 UTC" firstStartedPulling="2025-12-03 17:42:09.273956949 +0000 UTC m=+162.164652382" lastFinishedPulling="2025-12-03 17:43:26.620009212 +0000 UTC m=+239.510704645" observedRunningTime="2025-12-03 17:43:26.90660217 +0000 UTC m=+239.797297613" watchObservedRunningTime="2025-12-03 17:43:30.072848887 +0000 UTC m=+242.963544320" Dec 03 17:43:30 crc kubenswrapper[4687]: I1203 17:43:30.076000 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xds4t"] Dec 03 17:43:30 crc kubenswrapper[4687]: I1203 17:43:30.076272 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xds4t" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerName="registry-server" containerID="cri-o://8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720" gracePeriod=2 Dec 03 17:43:32 crc kubenswrapper[4687]: I1203 17:43:32.343063 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:43:32 crc kubenswrapper[4687]: I1203 17:43:32.343516 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:43:32 crc kubenswrapper[4687]: I1203 17:43:32.405689 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:43:32 crc kubenswrapper[4687]: I1203 17:43:32.932157 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:43:33 crc kubenswrapper[4687]: I1203 17:43:33.474579 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fbmtl"] Dec 03 17:43:33 crc kubenswrapper[4687]: I1203 17:43:33.675067 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:43:33 crc kubenswrapper[4687]: I1203 17:43:33.675109 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:43:33 crc kubenswrapper[4687]: I1203 17:43:33.726615 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:43:33 crc kubenswrapper[4687]: I1203 17:43:33.939625 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-clffd" Dec 03 17:43:34 crc kubenswrapper[4687]: I1203 17:43:34.106878 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:43:34 crc kubenswrapper[4687]: I1203 17:43:34.107234 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:43:34 crc kubenswrapper[4687]: I1203 17:43:34.150409 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:43:34 crc kubenswrapper[4687]: I1203 17:43:34.907726 4687 generic.go:334] "Generic (PLEG): container finished" podID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" 
containerID="8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720" exitCode=0 Dec 03 17:43:34 crc kubenswrapper[4687]: I1203 17:43:34.907897 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xds4t" event={"ID":"8a5839ff-9780-447d-b8a3-ea007b4d2a9d","Type":"ContainerDied","Data":"8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720"} Dec 03 17:43:34 crc kubenswrapper[4687]: I1203 17:43:34.908418 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fbmtl" podUID="a63bf54d-d493-4719-9279-57810413d447" containerName="registry-server" containerID="cri-o://4b020e722146ec78f997523b1c53cfdc6782267524b068bb13fc65d756787504" gracePeriod=2 Dec 03 17:43:34 crc kubenswrapper[4687]: I1203 17:43:34.961471 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:43:35 crc kubenswrapper[4687]: I1203 17:43:35.072244 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:43:35 crc kubenswrapper[4687]: I1203 17:43:35.072428 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:43:35 crc kubenswrapper[4687]: I1203 17:43:35.109178 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:43:35 crc kubenswrapper[4687]: E1203 17:43:35.294738 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720 is running failed: container process not found" containerID="8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 17:43:35 crc 
kubenswrapper[4687]: E1203 17:43:35.295435 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720 is running failed: container process not found" containerID="8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 17:43:35 crc kubenswrapper[4687]: E1203 17:43:35.295802 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720 is running failed: container process not found" containerID="8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 17:43:35 crc kubenswrapper[4687]: E1203 17:43:35.296006 4687 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-xds4t" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerName="registry-server" Dec 03 17:43:35 crc kubenswrapper[4687]: I1203 17:43:35.977573 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4j9rv" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.077559 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hw49"] Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.111168 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.164887 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-catalog-content\") pod \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.164967 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-utilities\") pod \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.165063 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln2ck\" (UniqueName: \"kubernetes.io/projected/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-kube-api-access-ln2ck\") pod \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\" (UID: \"8a5839ff-9780-447d-b8a3-ea007b4d2a9d\") " Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.166470 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-utilities" (OuterVolumeSpecName: "utilities") pod "8a5839ff-9780-447d-b8a3-ea007b4d2a9d" (UID: "8a5839ff-9780-447d-b8a3-ea007b4d2a9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.170983 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-kube-api-access-ln2ck" (OuterVolumeSpecName: "kube-api-access-ln2ck") pod "8a5839ff-9780-447d-b8a3-ea007b4d2a9d" (UID: "8a5839ff-9780-447d-b8a3-ea007b4d2a9d"). InnerVolumeSpecName "kube-api-access-ln2ck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.266331 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.266367 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln2ck\" (UniqueName: \"kubernetes.io/projected/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-kube-api-access-ln2ck\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.283530 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a5839ff-9780-447d-b8a3-ea007b4d2a9d" (UID: "8a5839ff-9780-447d-b8a3-ea007b4d2a9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.367876 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a5839ff-9780-447d-b8a3-ea007b4d2a9d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.920792 4687 generic.go:334] "Generic (PLEG): container finished" podID="a63bf54d-d493-4719-9279-57810413d447" containerID="4b020e722146ec78f997523b1c53cfdc6782267524b068bb13fc65d756787504" exitCode=0 Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.920875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbmtl" event={"ID":"a63bf54d-d493-4719-9279-57810413d447","Type":"ContainerDied","Data":"4b020e722146ec78f997523b1c53cfdc6782267524b068bb13fc65d756787504"} Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.923592 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-xds4t" event={"ID":"8a5839ff-9780-447d-b8a3-ea007b4d2a9d","Type":"ContainerDied","Data":"7a20f0e9490867b57eaff2ac95ac3420139e603943f44bbbfb9a544186532b7c"} Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.923633 4687 scope.go:117] "RemoveContainer" containerID="8161177cb30ad675eb964daf373c030f73139a909ef9bc4bdd50437ed849d720" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.923710 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xds4t" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.923785 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4hw49" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerName="registry-server" containerID="cri-o://b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f" gracePeriod=2 Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.937277 4687 scope.go:117] "RemoveContainer" containerID="a95e0da25bbd894c714fb42f04504bd37978d6487dbfa9fd92938bf359af7aad" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.957223 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xds4t"] Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.959004 4687 scope.go:117] "RemoveContainer" containerID="a61f5221c1516f0f6d059a997cdc9d575af653bba3d69248667e6afeaa350415" Dec 03 17:43:36 crc kubenswrapper[4687]: I1203 17:43:36.960925 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xds4t"] Dec 03 17:43:37 crc kubenswrapper[4687]: I1203 17:43:37.413983 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" path="/var/lib/kubelet/pods/8a5839ff-9780-447d-b8a3-ea007b4d2a9d/volumes" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.213454 4687 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.294565 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc5jb\" (UniqueName: \"kubernetes.io/projected/a63bf54d-d493-4719-9279-57810413d447-kube-api-access-hc5jb\") pod \"a63bf54d-d493-4719-9279-57810413d447\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.294672 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-utilities\") pod \"a63bf54d-d493-4719-9279-57810413d447\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.294718 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-catalog-content\") pod \"a63bf54d-d493-4719-9279-57810413d447\" (UID: \"a63bf54d-d493-4719-9279-57810413d447\") " Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.295781 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-utilities" (OuterVolumeSpecName: "utilities") pod "a63bf54d-d493-4719-9279-57810413d447" (UID: "a63bf54d-d493-4719-9279-57810413d447"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.311320 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a63bf54d-d493-4719-9279-57810413d447-kube-api-access-hc5jb" (OuterVolumeSpecName: "kube-api-access-hc5jb") pod "a63bf54d-d493-4719-9279-57810413d447" (UID: "a63bf54d-d493-4719-9279-57810413d447"). 
InnerVolumeSpecName "kube-api-access-hc5jb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.358852 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a63bf54d-d493-4719-9279-57810413d447" (UID: "a63bf54d-d493-4719-9279-57810413d447"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.379440 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.396221 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.396255 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a63bf54d-d493-4719-9279-57810413d447-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.396268 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc5jb\" (UniqueName: \"kubernetes.io/projected/a63bf54d-d493-4719-9279-57810413d447-kube-api-access-hc5jb\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.496870 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9948m\" (UniqueName: \"kubernetes.io/projected/9aa3a99c-454e-48aa-9a98-703a5c422d74-kube-api-access-9948m\") pod \"9aa3a99c-454e-48aa-9a98-703a5c422d74\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.497024 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-catalog-content\") pod \"9aa3a99c-454e-48aa-9a98-703a5c422d74\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.497056 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-utilities\") pod \"9aa3a99c-454e-48aa-9a98-703a5c422d74\" (UID: \"9aa3a99c-454e-48aa-9a98-703a5c422d74\") " Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.499248 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-utilities" (OuterVolumeSpecName: "utilities") pod "9aa3a99c-454e-48aa-9a98-703a5c422d74" (UID: "9aa3a99c-454e-48aa-9a98-703a5c422d74"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.504919 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa3a99c-454e-48aa-9a98-703a5c422d74-kube-api-access-9948m" (OuterVolumeSpecName: "kube-api-access-9948m") pod "9aa3a99c-454e-48aa-9a98-703a5c422d74" (UID: "9aa3a99c-454e-48aa-9a98-703a5c422d74"). InnerVolumeSpecName "kube-api-access-9948m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.518193 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9aa3a99c-454e-48aa-9a98-703a5c422d74" (UID: "9aa3a99c-454e-48aa-9a98-703a5c422d74"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.598668 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.598748 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aa3a99c-454e-48aa-9a98-703a5c422d74-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.598762 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9948m\" (UniqueName: \"kubernetes.io/projected/9aa3a99c-454e-48aa-9a98-703a5c422d74-kube-api-access-9948m\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.950501 4687 generic.go:334] "Generic (PLEG): container finished" podID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerID="b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f" exitCode=0 Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.950541 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hw49" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.950614 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hw49" event={"ID":"9aa3a99c-454e-48aa-9a98-703a5c422d74","Type":"ContainerDied","Data":"b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f"} Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.950644 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hw49" event={"ID":"9aa3a99c-454e-48aa-9a98-703a5c422d74","Type":"ContainerDied","Data":"11a3d9fc958f0de7f688795c561fa377b5d2c57e4d1afd9e30ed9346a8b442b3"} Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.950663 4687 scope.go:117] "RemoveContainer" containerID="b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.956207 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbmtl" event={"ID":"a63bf54d-d493-4719-9279-57810413d447","Type":"ContainerDied","Data":"a125c5248d51900405ae01312590ca8612bb044d053aebe81b4e8898300eee25"} Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.956291 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fbmtl" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.969402 4687 scope.go:117] "RemoveContainer" containerID="c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237" Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.979843 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hw49"] Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.982248 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hw49"] Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.995573 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fbmtl"] Dec 03 17:43:38 crc kubenswrapper[4687]: I1203 17:43:38.998300 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fbmtl"] Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.001633 4687 scope.go:117] "RemoveContainer" containerID="08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.015186 4687 scope.go:117] "RemoveContainer" containerID="b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f" Dec 03 17:43:39 crc kubenswrapper[4687]: E1203 17:43:39.015734 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f\": container with ID starting with b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f not found: ID does not exist" containerID="b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.015781 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f"} err="failed to get container status \"b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f\": rpc error: code = NotFound desc = could not find container \"b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f\": container with ID starting with b462a0002c084b679c4e36ef32230312bcdac7476b0c8cfb34e72911585bc03f not found: ID does not exist" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.015807 4687 scope.go:117] "RemoveContainer" containerID="c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237" Dec 03 17:43:39 crc kubenswrapper[4687]: E1203 17:43:39.016408 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237\": container with ID starting with c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237 not found: ID does not exist" containerID="c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.016444 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237"} err="failed to get container status \"c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237\": rpc error: code = NotFound desc = could not find container \"c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237\": container with ID starting with c3b5b264aa1bcc12a67db5217eb7eba863bdf15bbfbaba51318894b33e5a7237 not found: ID does not exist" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.016469 4687 scope.go:117] "RemoveContainer" containerID="08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649" Dec 03 17:43:39 crc kubenswrapper[4687]: E1203 17:43:39.016756 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649\": container with ID starting with 08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649 not found: ID does not exist" containerID="08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.016784 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649"} err="failed to get container status \"08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649\": rpc error: code = NotFound desc = could not find container \"08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649\": container with ID starting with 08a5425ca57a83a1cd12e8512e73da9da54448cc90af13a8e8c855bb93ea4649 not found: ID does not exist" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.016805 4687 scope.go:117] "RemoveContainer" containerID="4b020e722146ec78f997523b1c53cfdc6782267524b068bb13fc65d756787504" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.027805 4687 scope.go:117] "RemoveContainer" containerID="b602ab837670e4f3a6131e7b043b118242020c491c69d340126e9d0420c94e95" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.042832 4687 scope.go:117] "RemoveContainer" containerID="122a0c33d6000873bcf3568f96c14f503ba95334f06e8294115b1cc48f12e5a3" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.415456 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" path="/var/lib/kubelet/pods/9aa3a99c-454e-48aa-9a98-703a5c422d74/volumes" Dec 03 17:43:39 crc kubenswrapper[4687]: I1203 17:43:39.416157 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a63bf54d-d493-4719-9279-57810413d447" path="/var/lib/kubelet/pods/a63bf54d-d493-4719-9279-57810413d447/volumes" Dec 03 17:43:44 crc 
kubenswrapper[4687]: I1203 17:43:44.112976 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nv4n7"] Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.801781 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802043 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63bf54d-d493-4719-9279-57810413d447" containerName="extract-content" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802062 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63bf54d-d493-4719-9279-57810413d447" containerName="extract-content" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802074 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63bf54d-d493-4719-9279-57810413d447" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802080 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63bf54d-d493-4719-9279-57810413d447" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802091 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerName="extract-content" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802097 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerName="extract-content" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802107 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerName="extract-content" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802117 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerName="extract-content" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802159 
4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802168 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802179 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerName="extract-utilities" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802188 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerName="extract-utilities" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802202 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerName="extract-utilities" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802209 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerName="extract-utilities" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802217 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939c3133-15b9-4300-bd04-10adde0a7bd1" containerName="pruner" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802223 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="939c3133-15b9-4300-bd04-10adde0a7bd1" containerName="pruner" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802232 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a63bf54d-d493-4719-9279-57810413d447" containerName="extract-utilities" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802238 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a63bf54d-d493-4719-9279-57810413d447" containerName="extract-utilities" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802247 4687 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerName="extract-utilities" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802253 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerName="extract-utilities" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802262 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerName="extract-content" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802269 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerName="extract-content" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802282 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802296 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802308 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802315 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802404 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a63bf54d-d493-4719-9279-57810413d447" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802416 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5362cb96-c834-44db-8cbe-ff42609ebe76" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802425 4687 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8a5839ff-9780-447d-b8a3-ea007b4d2a9d" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802434 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="939c3133-15b9-4300-bd04-10adde0a7bd1" containerName="pruner" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802440 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa3a99c-454e-48aa-9a98-703a5c422d74" containerName="registry-server" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.802794 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.802792 4687 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.803224 4687 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.803716 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed" gracePeriod=15 Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.803719 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b" gracePeriod=15 Dec 03 17:43:47 
crc kubenswrapper[4687]: I1203 17:43:47.803786 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e" gracePeriod=15 Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.803901 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca" gracePeriod=15 Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804204 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557" gracePeriod=15 Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804300 4687 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.804423 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804432 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.804446 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804453 4687 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.804464 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804470 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.804478 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804485 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.804492 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804499 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.804510 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804521 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 03 17:43:47 crc kubenswrapper[4687]: E1203 17:43:47.804534 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 17:43:47 crc 
kubenswrapper[4687]: I1203 17:43:47.804542 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804633 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804645 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804653 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804661 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804668 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.804850 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.925089 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.925186 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.925224 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.925246 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.925279 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.925319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.925333 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:47 crc kubenswrapper[4687]: I1203 17:43:47.925359 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.015603 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.017031 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.017742 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed" exitCode=0 Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.017763 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e" exitCode=0 Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.017771 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557" exitCode=0 Dec 03 
17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.017777 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca" exitCode=2 Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.017815 4687 scope.go:117] "RemoveContainer" containerID="bed778c8c5662061b7fc9f232620b96bd0099b107aa00ea361d9f97235b9cada" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026646 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026689 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026724 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026727 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 
17:43:48.026742 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026777 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026810 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026825 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026858 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026939 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.026966 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.027064 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.027071 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.027103 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:48 crc kubenswrapper[4687]: I1203 17:43:48.027151 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:49 crc kubenswrapper[4687]: I1203 17:43:49.025099 4687 generic.go:334] "Generic (PLEG): container finished" podID="499fb078-750b-4623-a979-d6935e5353c8" containerID="1bf7609808a2a47ee70b557bc3d01ab18d7ee234e02d640e3bec0b1f61498ab3" exitCode=0 Dec 03 17:43:49 crc kubenswrapper[4687]: I1203 17:43:49.025178 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"499fb078-750b-4623-a979-d6935e5353c8","Type":"ContainerDied","Data":"1bf7609808a2a47ee70b557bc3d01ab18d7ee234e02d640e3bec0b1f61498ab3"} Dec 03 17:43:49 crc kubenswrapper[4687]: I1203 17:43:49.026192 4687 status_manager.go:851] "Failed to get status for pod" podUID="499fb078-750b-4623-a979-d6935e5353c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:49 crc kubenswrapper[4687]: I1203 17:43:49.026747 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:49 crc 
kubenswrapper[4687]: I1203 17:43:49.028742 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.334053 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.335896 4687 status_manager.go:851] "Failed to get status for pod" podUID="499fb078-750b-4623-a979-d6935e5353c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.454927 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-var-lock\") pod \"499fb078-750b-4623-a979-d6935e5353c8\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.455079 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-kubelet-dir\") pod \"499fb078-750b-4623-a979-d6935e5353c8\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.455100 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-var-lock" (OuterVolumeSpecName: "var-lock") pod "499fb078-750b-4623-a979-d6935e5353c8" (UID: "499fb078-750b-4623-a979-d6935e5353c8"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.455231 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/499fb078-750b-4623-a979-d6935e5353c8-kube-api-access\") pod \"499fb078-750b-4623-a979-d6935e5353c8\" (UID: \"499fb078-750b-4623-a979-d6935e5353c8\") " Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.455226 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "499fb078-750b-4623-a979-d6935e5353c8" (UID: "499fb078-750b-4623-a979-d6935e5353c8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.455509 4687 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.455553 4687 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/499fb078-750b-4623-a979-d6935e5353c8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.460845 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/499fb078-750b-4623-a979-d6935e5353c8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "499fb078-750b-4623-a979-d6935e5353c8" (UID: "499fb078-750b-4623-a979-d6935e5353c8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.557204 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/499fb078-750b-4623-a979-d6935e5353c8-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.686692 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.688027 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.688989 4687 status_manager.go:851] "Failed to get status for pod" podUID="499fb078-750b-4623-a979-d6935e5353c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.689687 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.759198 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.759503 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.759665 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.759292 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.759600 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.759713 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.760164 4687 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.760239 4687 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:50 crc kubenswrapper[4687]: I1203 17:43:50.760296 4687 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.040508 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"499fb078-750b-4623-a979-d6935e5353c8","Type":"ContainerDied","Data":"f5b8492b605452ebfd2330f1e537bd73a18fe590dae26639a0dc169ad4c20c78"} Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.040555 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b8492b605452ebfd2330f1e537bd73a18fe590dae26639a0dc169ad4c20c78" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.040530 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.043915 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.044474 4687 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b" exitCode=0 Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.044517 4687 scope.go:117] "RemoveContainer" containerID="9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.044616 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.066881 4687 status_manager.go:851] "Failed to get status for pod" podUID="499fb078-750b-4623-a979-d6935e5353c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.067305 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.067401 4687 scope.go:117] "RemoveContainer" containerID="3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.067710 4687 status_manager.go:851] "Failed to get status 
for pod" podUID="499fb078-750b-4623-a979-d6935e5353c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.068056 4687 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.082189 4687 scope.go:117] "RemoveContainer" containerID="b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.095828 4687 scope.go:117] "RemoveContainer" containerID="0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.107867 4687 scope.go:117] "RemoveContainer" containerID="bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.143808 4687 scope.go:117] "RemoveContainer" containerID="8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.172427 4687 scope.go:117] "RemoveContainer" containerID="9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed" Dec 03 17:43:51 crc kubenswrapper[4687]: E1203 17:43:51.173110 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\": container with ID starting with 9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed not found: ID does not exist" 
containerID="9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.173191 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed"} err="failed to get container status \"9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\": rpc error: code = NotFound desc = could not find container \"9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed\": container with ID starting with 9f2e11bfb3d78c3c11249730aae3ca5556109087b38fe6b6752baef59d1999ed not found: ID does not exist" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.173232 4687 scope.go:117] "RemoveContainer" containerID="3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e" Dec 03 17:43:51 crc kubenswrapper[4687]: E1203 17:43:51.173659 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\": container with ID starting with 3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e not found: ID does not exist" containerID="3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.173707 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e"} err="failed to get container status \"3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\": rpc error: code = NotFound desc = could not find container \"3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e\": container with ID starting with 3b13bbbb1f9d417fbc033647958c21ae33b8f267dbdacb55eb8c69e64048e89e not found: ID does not exist" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.173736 4687 scope.go:117] 
"RemoveContainer" containerID="b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557" Dec 03 17:43:51 crc kubenswrapper[4687]: E1203 17:43:51.174344 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\": container with ID starting with b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557 not found: ID does not exist" containerID="b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.174378 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557"} err="failed to get container status \"b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\": rpc error: code = NotFound desc = could not find container \"b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557\": container with ID starting with b22f2dcb96afaf9c490ea2fe5374049e54b7440b3c2f3845fc7e9ec5a87d5557 not found: ID does not exist" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.174392 4687 scope.go:117] "RemoveContainer" containerID="0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca" Dec 03 17:43:51 crc kubenswrapper[4687]: E1203 17:43:51.174903 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\": container with ID starting with 0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca not found: ID does not exist" containerID="0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.174974 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca"} err="failed to get container status \"0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\": rpc error: code = NotFound desc = could not find container \"0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca\": container with ID starting with 0b4c8e1d2733a13cc59f979fcb7a0a832dc28b69f57e2a52f0e8389b741af6ca not found: ID does not exist" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.175035 4687 scope.go:117] "RemoveContainer" containerID="bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b" Dec 03 17:43:51 crc kubenswrapper[4687]: E1203 17:43:51.175848 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\": container with ID starting with bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b not found: ID does not exist" containerID="bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.175894 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b"} err="failed to get container status \"bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\": rpc error: code = NotFound desc = could not find container \"bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b\": container with ID starting with bb00ebdb08f61453423759575ff7cf2334c6a9f5b2e310ebf3685e704e386c5b not found: ID does not exist" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.176309 4687 scope.go:117] "RemoveContainer" containerID="8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087" Dec 03 17:43:51 crc kubenswrapper[4687]: E1203 17:43:51.176797 4687 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\": container with ID starting with 8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087 not found: ID does not exist" containerID="8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.176842 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087"} err="failed to get container status \"8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\": rpc error: code = NotFound desc = could not find container \"8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087\": container with ID starting with 8ccfbdb534e2f52f311aa3ac47834a320bc50698e659379324f332f19c575087 not found: ID does not exist" Dec 03 17:43:51 crc kubenswrapper[4687]: I1203 17:43:51.414558 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 03 17:43:52 crc kubenswrapper[4687]: E1203 17:43:52.828354 4687 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:52 crc kubenswrapper[4687]: I1203 17:43:52.828828 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:54 crc kubenswrapper[4687]: E1203 17:43:54.105287 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:54 crc kubenswrapper[4687]: E1203 17:43:54.105683 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:54 crc kubenswrapper[4687]: E1203 17:43:54.106220 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:54 crc kubenswrapper[4687]: E1203 17:43:54.106622 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:54 crc kubenswrapper[4687]: E1203 17:43:54.106883 4687 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:54 crc kubenswrapper[4687]: I1203 17:43:54.106929 4687 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 17:43:54 crc kubenswrapper[4687]: E1203 17:43:54.107351 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="200ms" Dec 03 17:43:54 crc kubenswrapper[4687]: E1203 17:43:54.308089 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="400ms" Dec 03 17:43:54 crc kubenswrapper[4687]: E1203 17:43:54.395345 4687 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.130:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187dc58c1272ce56 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-03 17:43:54.39470959 +0000 UTC m=+267.285405023,LastTimestamp:2025-12-03 17:43:54.39470959 +0000 UTC m=+267.285405023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 03 17:43:54 crc kubenswrapper[4687]: E1203 17:43:54.709188 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.130:6443: connect: connection refused" interval="800ms" Dec 03 17:43:55 crc kubenswrapper[4687]: I1203 17:43:55.084974 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ad0dea2226f23692efa7570f02e9212188a29fa017c4b68531e0e26c91a05ba4"} Dec 03 17:43:55 crc kubenswrapper[4687]: I1203 17:43:55.085021 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1ce68525baab3254b3b909539586d5e569f5f7d5ae166d57dd2797e2c857c392"} Dec 03 17:43:55 crc kubenswrapper[4687]: E1203 17:43:55.085727 4687 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:43:55 crc kubenswrapper[4687]: I1203 17:43:55.085969 4687 status_manager.go:851] "Failed to get status for pod" podUID="499fb078-750b-4623-a979-d6935e5353c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:55 crc kubenswrapper[4687]: E1203 17:43:55.511174 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="1.6s" Dec 03 17:43:57 crc kubenswrapper[4687]: E1203 17:43:57.112111 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="3.2s" Dec 03 17:43:57 crc kubenswrapper[4687]: I1203 17:43:57.411216 4687 status_manager.go:851] "Failed to get status for pod" podUID="499fb078-750b-4623-a979-d6935e5353c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:59 crc kubenswrapper[4687]: I1203 17:43:59.406907 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:59 crc kubenswrapper[4687]: I1203 17:43:59.408190 4687 status_manager.go:851] "Failed to get status for pod" podUID="499fb078-750b-4623-a979-d6935e5353c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:43:59 crc kubenswrapper[4687]: I1203 17:43:59.427252 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:43:59 crc kubenswrapper[4687]: I1203 17:43:59.427504 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:43:59 crc kubenswrapper[4687]: E1203 17:43:59.428198 4687 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:59 crc kubenswrapper[4687]: I1203 17:43:59.428600 4687 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:43:59 crc kubenswrapper[4687]: W1203 17:43:59.453654 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-97db42796ef01acddc73777fba329c5c05d23fbf9336391b5eda1191dd8a8759 WatchSource:0}: Error finding container 97db42796ef01acddc73777fba329c5c05d23fbf9336391b5eda1191dd8a8759: Status 404 returned error can't find the container with id 97db42796ef01acddc73777fba329c5c05d23fbf9336391b5eda1191dd8a8759 Dec 03 17:44:00 crc kubenswrapper[4687]: I1203 17:44:00.126367 4687 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="bc902121d7dfa74d132313e701c88cf3f7d5628e197a30b30b66b880fc16e9b9" exitCode=0 Dec 03 17:44:00 crc kubenswrapper[4687]: I1203 17:44:00.126471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"bc902121d7dfa74d132313e701c88cf3f7d5628e197a30b30b66b880fc16e9b9"} Dec 03 17:44:00 crc kubenswrapper[4687]: I1203 17:44:00.127297 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"97db42796ef01acddc73777fba329c5c05d23fbf9336391b5eda1191dd8a8759"} Dec 03 17:44:00 crc kubenswrapper[4687]: I1203 17:44:00.127882 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:00 crc kubenswrapper[4687]: I1203 17:44:00.127910 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:00 crc kubenswrapper[4687]: I1203 17:44:00.128405 4687 
status_manager.go:851] "Failed to get status for pod" podUID="499fb078-750b-4623-a979-d6935e5353c8" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" Dec 03 17:44:00 crc kubenswrapper[4687]: E1203 17:44:00.128628 4687 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.130:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:44:00 crc kubenswrapper[4687]: E1203 17:44:00.313079 4687 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.130:6443: connect: connection refused" interval="6.4s" Dec 03 17:44:01 crc kubenswrapper[4687]: I1203 17:44:01.138108 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"637a1c140622d4d9e426101d154e08de8776dad42465aa249d18a41c869562ca"} Dec 03 17:44:01 crc kubenswrapper[4687]: I1203 17:44:01.138427 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9b420f30fd8b2bb4c96f59aad120aca4ea2524e84498b683383fb78e046dd1e5"} Dec 03 17:44:01 crc kubenswrapper[4687]: I1203 17:44:01.138438 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e6ca642a4d66160602df0a11416e039eac63c3b0c18280fa1b9c39a0aaa3af22"} Dec 03 17:44:01 crc kubenswrapper[4687]: I1203 
17:44:01.138446 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"93d6c421d98e928536d3e0b2a4f4485b9f27700424409804e5a548da9a7b79eb"} Dec 03 17:44:02 crc kubenswrapper[4687]: I1203 17:44:02.154480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5e8d955525d9f9345bb8e099e55df50d5c63bbf0ecf2918a64790d0cfce858ba"} Dec 03 17:44:02 crc kubenswrapper[4687]: I1203 17:44:02.154872 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:44:02 crc kubenswrapper[4687]: I1203 17:44:02.154978 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:02 crc kubenswrapper[4687]: I1203 17:44:02.155003 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:03 crc kubenswrapper[4687]: I1203 17:44:03.161649 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 17:44:03 crc kubenswrapper[4687]: I1203 17:44:03.161702 4687 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1" exitCode=1 Dec 03 17:44:03 crc kubenswrapper[4687]: I1203 17:44:03.161731 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1"} Dec 03 17:44:03 crc kubenswrapper[4687]: I1203 17:44:03.162203 4687 scope.go:117] "RemoveContainer" containerID="3de75f41cc042179ac9dc79c0b78ad64d505c86372a601829c1892d5d58a92f1" Dec 03 17:44:04 crc kubenswrapper[4687]: I1203 17:44:04.169645 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 03 17:44:04 crc kubenswrapper[4687]: I1203 17:44:04.169949 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ab5d9549c053dcc6b1a080e470fe347c8643f683e35547f91261bf54a704030"} Dec 03 17:44:04 crc kubenswrapper[4687]: I1203 17:44:04.429692 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:44:04 crc kubenswrapper[4687]: I1203 17:44:04.429754 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:44:04 crc kubenswrapper[4687]: I1203 17:44:04.435015 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:44:07 crc kubenswrapper[4687]: I1203 17:44:07.172911 4687 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:44:07 crc kubenswrapper[4687]: I1203 17:44:07.419730 4687 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="909ded8a-4bb9-4a12-bc88-326818c56d2d" Dec 03 17:44:08 crc kubenswrapper[4687]: I1203 
17:44:08.192767 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:08 crc kubenswrapper[4687]: I1203 17:44:08.192821 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:08 crc kubenswrapper[4687]: I1203 17:44:08.196228 4687 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="909ded8a-4bb9-4a12-bc88-326818c56d2d" Dec 03 17:44:08 crc kubenswrapper[4687]: I1203 17:44:08.198703 4687 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://93d6c421d98e928536d3e0b2a4f4485b9f27700424409804e5a548da9a7b79eb" Dec 03 17:44:08 crc kubenswrapper[4687]: I1203 17:44:08.198739 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:44:09 crc kubenswrapper[4687]: I1203 17:44:09.138356 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" podUID="11f7e8b6-ef2e-48ca-b841-f3df95c775be" containerName="oauth-openshift" containerID="cri-o://9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0" gracePeriod=15 Dec 03 17:44:09 crc kubenswrapper[4687]: I1203 17:44:09.143099 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:44:09 crc kubenswrapper[4687]: I1203 17:44:09.196343 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:09 crc kubenswrapper[4687]: I1203 17:44:09.196374 4687 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:09 crc kubenswrapper[4687]: I1203 17:44:09.199408 4687 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="909ded8a-4bb9-4a12-bc88-326818c56d2d" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.211045 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.216756 4687 generic.go:334] "Generic (PLEG): container finished" podID="11f7e8b6-ef2e-48ca-b841-f3df95c775be" containerID="9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0" exitCode=0 Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.216800 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" event={"ID":"11f7e8b6-ef2e-48ca-b841-f3df95c775be","Type":"ContainerDied","Data":"9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0"} Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.216827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" event={"ID":"11f7e8b6-ef2e-48ca-b841-f3df95c775be","Type":"ContainerDied","Data":"25c0190779d503cfb36bd820aff800eb9879df5a67360489ab460ed0638b7b96"} Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.216842 4687 scope.go:117] "RemoveContainer" containerID="9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.243219 4687 scope.go:117] "RemoveContainer" containerID="9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0" Dec 03 17:44:10 crc kubenswrapper[4687]: E1203 17:44:10.243901 4687 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0\": container with ID starting with 9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0 not found: ID does not exist" containerID="9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.243937 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0"} err="failed to get container status \"9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0\": rpc error: code = NotFound desc = could not find container \"9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0\": container with ID starting with 9a01366e98d3a4a044008bce8479ede953d52ab15bc69870e74aee058d0f23e0 not found: ID does not exist" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313101 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-login\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313179 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-dir\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313203 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-policies\") pod 
\"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313237 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-ocp-branding-template\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313260 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-session\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313282 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-idp-0-file-data\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313317 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-cliconfig\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313270 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313359 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-provider-selection\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313393 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-service-ca\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313422 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhqkn\" (UniqueName: \"kubernetes.io/projected/11f7e8b6-ef2e-48ca-b841-f3df95c775be-kube-api-access-zhqkn\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313452 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-serving-cert\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313486 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-router-certs\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: 
\"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313510 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-error\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313532 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-trusted-ca-bundle\") pod \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\" (UID: \"11f7e8b6-ef2e-48ca-b841-f3df95c775be\") " Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.313784 4687 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.314186 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.314913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.315234 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.317985 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.319576 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.319994 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.320503 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.321274 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.321752 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11f7e8b6-ef2e-48ca-b841-f3df95c775be-kube-api-access-zhqkn" (OuterVolumeSpecName: "kube-api-access-zhqkn") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "kube-api-access-zhqkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.323544 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.323699 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.326353 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.328208 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "11f7e8b6-ef2e-48ca-b841-f3df95c775be" (UID: "11f7e8b6-ef2e-48ca-b841-f3df95c775be"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.414989 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415038 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415059 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415077 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415097 4687 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415116 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415160 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415181 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415199 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415217 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415237 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415255 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhqkn\" (UniqueName: \"kubernetes.io/projected/11f7e8b6-ef2e-48ca-b841-f3df95c775be-kube-api-access-zhqkn\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:10 crc kubenswrapper[4687]: I1203 17:44:10.415273 4687 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/11f7e8b6-ef2e-48ca-b841-f3df95c775be-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:11 crc kubenswrapper[4687]: I1203 17:44:11.224433 
4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nv4n7" Dec 03 17:44:11 crc kubenswrapper[4687]: I1203 17:44:11.782396 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:44:11 crc kubenswrapper[4687]: I1203 17:44:11.786567 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:44:13 crc kubenswrapper[4687]: I1203 17:44:13.552774 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 17:44:13 crc kubenswrapper[4687]: I1203 17:44:13.599897 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 17:44:17 crc kubenswrapper[4687]: I1203 17:44:17.333612 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 17:44:17 crc kubenswrapper[4687]: I1203 17:44:17.530765 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 17:44:18 crc kubenswrapper[4687]: I1203 17:44:18.112283 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 03 17:44:18 crc kubenswrapper[4687]: I1203 17:44:18.270078 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 03 17:44:18 crc kubenswrapper[4687]: I1203 17:44:18.608867 4687 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 17:44:18 crc kubenswrapper[4687]: I1203 17:44:18.954783 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 17:44:19 crc kubenswrapper[4687]: I1203 17:44:19.078300 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 03 17:44:19 crc kubenswrapper[4687]: I1203 17:44:19.149539 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 03 17:44:19 crc kubenswrapper[4687]: I1203 17:44:19.558575 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 17:44:19 crc kubenswrapper[4687]: I1203 17:44:19.619560 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 17:44:20 crc kubenswrapper[4687]: I1203 17:44:20.056728 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 17:44:20 crc kubenswrapper[4687]: I1203 17:44:20.068044 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 17:44:20 crc kubenswrapper[4687]: I1203 17:44:20.275226 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 17:44:20 crc kubenswrapper[4687]: I1203 17:44:20.342025 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 17:44:20 crc kubenswrapper[4687]: I1203 17:44:20.591070 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 17:44:20 crc kubenswrapper[4687]: I1203 17:44:20.958908 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 17:44:20 crc kubenswrapper[4687]: I1203 17:44:20.966497 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 03 17:44:20 crc kubenswrapper[4687]: I1203 17:44:20.995836 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.045580 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.098266 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.270941 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.285239 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.449384 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.482814 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.514726 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.542346 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.643514 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 
17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.685917 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 03 17:44:21 crc kubenswrapper[4687]: I1203 17:44:21.854312 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.128208 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.259628 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.320778 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.453647 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.510440 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.522380 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.640802 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.646348 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.775241 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.788085 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.832502 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 17:44:22 crc kubenswrapper[4687]: I1203 17:44:22.964731 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.040630 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.067730 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.127474 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.127850 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.172893 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.222060 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.286472 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 17:44:23 
crc kubenswrapper[4687]: I1203 17:44:23.460081 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.553218 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.613709 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.663234 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.672009 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.691174 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.715221 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.726048 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.745039 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.857896 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 17:44:23 crc kubenswrapper[4687]: I1203 17:44:23.967857 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.118227 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.121920 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.233199 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.237293 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.282741 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.326628 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.338753 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.401095 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.420323 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.428061 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 17:44:24 crc 
kubenswrapper[4687]: I1203 17:44:24.526044 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.530940 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.532787 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.547600 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.567282 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.608824 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.617826 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.622672 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.677404 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.681607 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.806408 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" 
Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.887341 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.937534 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 17:44:24 crc kubenswrapper[4687]: I1203 17:44:24.939855 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.004979 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.078800 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.086524 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.132369 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.152462 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.153944 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.449431 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.485848 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.524682 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.598041 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.677056 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.681689 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.720771 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.863822 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.919534 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 17:44:25 crc kubenswrapper[4687]: I1203 17:44:25.981614 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.033144 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.056382 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.131076 4687 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.352525 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.354219 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.379637 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.412391 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.441306 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.482075 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.483607 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.500698 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.548032 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.605659 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 
17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.678304 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.747389 4687 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.753105 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.754090 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.900949 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.903171 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.929141 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 03 17:44:26 crc kubenswrapper[4687]: I1203 17:44:26.988173 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.039057 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.066148 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.075477 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.075535 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.170725 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.379880 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.421546 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.453636 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.507316 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.536235 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.537940 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.555193 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.616528 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: 
I1203 17:44:27.624347 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.635862 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.654178 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.731294 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.797833 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.862529 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.866208 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.901812 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.952523 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.976417 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.977437 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 17:44:27 crc kubenswrapper[4687]: I1203 17:44:27.991162 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.126072 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.191887 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.213191 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.285203 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.341190 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.383190 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.397001 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.508149 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.649559 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.670752 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 17:44:28 crc kubenswrapper[4687]: I1203 17:44:28.886682 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.000270 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.033944 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.045573 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.051080 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.069684 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.086698 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.103847 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.219730 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.251249 4687 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.331402 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.421646 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.421712 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.440868 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.548544 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.637089 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.642548 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.736317 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.765945 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.772603 4687 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 17:44:29 crc 
kubenswrapper[4687]: I1203 17:44:29.774079 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.873739 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.941061 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 17:44:29 crc kubenswrapper[4687]: I1203 17:44:29.986684 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 17:44:30 crc kubenswrapper[4687]: I1203 17:44:30.017585 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 17:44:30 crc kubenswrapper[4687]: I1203 17:44:30.313682 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 17:44:30 crc kubenswrapper[4687]: I1203 17:44:30.434701 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 17:44:30 crc kubenswrapper[4687]: I1203 17:44:30.529555 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 17:44:30 crc kubenswrapper[4687]: I1203 17:44:30.640779 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 17:44:30 crc kubenswrapper[4687]: I1203 17:44:30.950493 4687 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 17:44:30 crc kubenswrapper[4687]: I1203 17:44:30.957480 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 
17:44:31.156616 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.162698 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.246187 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.364084 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.367609 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.388203 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.400190 4687 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.405942 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nv4n7","openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.406045 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5477954dc8-85fhm","openshift-kube-apiserver/kube-apiserver-crc"] Dec 03 17:44:31 crc kubenswrapper[4687]: E1203 17:44:31.406375 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="499fb078-750b-4623-a979-d6935e5353c8" containerName="installer" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.406425 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="499fb078-750b-4623-a979-d6935e5353c8" containerName="installer" Dec 03 17:44:31 crc kubenswrapper[4687]: E1203 17:44:31.406443 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11f7e8b6-ef2e-48ca-b841-f3df95c775be" containerName="oauth-openshift" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.406453 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="11f7e8b6-ef2e-48ca-b841-f3df95c775be" containerName="oauth-openshift" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.406590 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="499fb078-750b-4623-a979-d6935e5353c8" containerName="installer" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.406606 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="11f7e8b6-ef2e-48ca-b841-f3df95c775be" containerName="oauth-openshift" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.406723 4687 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.406754 4687 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6587599f-4dc2-4ad2-9a44-2453eae89243" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.407228 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.410259 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.411372 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.413413 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.413636 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.413806 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.413859 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.413905 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.413986 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.414018 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.413644 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 
17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.413999 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.414813 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.422087 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11f7e8b6-ef2e-48ca-b841-f3df95c775be" path="/var/lib/kubelet/pods/11f7e8b6-ef2e-48ca-b841-f3df95c775be/volumes" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.422936 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.424188 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.429783 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.430030 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.437209 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.437192735 podStartE2EDuration="24.437192735s" podCreationTimestamp="2025-12-03 17:44:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:44:31.434999569 +0000 UTC m=+304.325695022" watchObservedRunningTime="2025-12-03 17:44:31.437192735 +0000 UTC m=+304.327888168" Dec 03 17:44:31 crc 
kubenswrapper[4687]: I1203 17:44:31.484511 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486000 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486049 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486105 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-audit-policies\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486196 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-service-ca\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " 
pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486276 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486337 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486485 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-template-login\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486560 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-template-error\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486586 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f8p8\" (UniqueName: \"kubernetes.io/projected/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-kube-api-access-2f8p8\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486668 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-router-certs\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486733 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486769 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-audit-dir\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486799 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-session\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.486832 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.508730 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.518815 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.587534 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-audit-dir\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.587579 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-session\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc 
kubenswrapper[4687]: I1203 17:44:31.587611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.587639 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.587669 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.587675 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-audit-dir\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.587691 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-audit-policies\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.587874 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-service-ca\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.587915 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.587973 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.588050 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-template-login\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " 
pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.588074 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-template-error\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.588089 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f8p8\" (UniqueName: \"kubernetes.io/projected/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-kube-api-access-2f8p8\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.588166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-router-certs\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.588207 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.588558 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-audit-policies\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.589240 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.590076 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-service-ca\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.590767 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.593909 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " 
pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.593927 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.594288 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-router-certs\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.594527 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.594548 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.595290 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-system-session\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.595553 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-template-login\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.595641 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-v4-0-config-user-template-error\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.618476 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f8p8\" (UniqueName: \"kubernetes.io/projected/87e402f7-d9b1-4dc7-aafe-1628b97d85d8-kube-api-access-2f8p8\") pod \"oauth-openshift-5477954dc8-85fhm\" (UID: \"87e402f7-d9b1-4dc7-aafe-1628b97d85d8\") " pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.669636 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.675286 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.718926 4687 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.731551 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.767514 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.768771 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.807528 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.830520 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 17:44:31 crc kubenswrapper[4687]: I1203 17:44:31.928508 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.119650 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.126537 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5477954dc8-85fhm"] Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.133159 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.191615 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.230453 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.349155 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" event={"ID":"87e402f7-d9b1-4dc7-aafe-1628b97d85d8","Type":"ContainerStarted","Data":"6208be5380b79675f0c85d8ba6e33f80f61c493da447b0606844304f657076d3"} Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.520269 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.574839 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.598944 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.780462 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.855098 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.926493 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.949831 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.976727 4687 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 17:44:32 crc kubenswrapper[4687]: I1203 17:44:32.992198 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.007077 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.186801 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.356567 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5477954dc8-85fhm_87e402f7-d9b1-4dc7-aafe-1628b97d85d8/oauth-openshift/0.log" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.356611 4687 generic.go:334] "Generic (PLEG): container finished" podID="87e402f7-d9b1-4dc7-aafe-1628b97d85d8" containerID="980e28c5e1de2a23268282df7269d8a11ac72aba3e000f2a17b574f2843f1de7" exitCode=255 Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.356679 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" event={"ID":"87e402f7-d9b1-4dc7-aafe-1628b97d85d8","Type":"ContainerDied","Data":"980e28c5e1de2a23268282df7269d8a11ac72aba3e000f2a17b574f2843f1de7"} Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.357075 4687 scope.go:117] "RemoveContainer" containerID="980e28c5e1de2a23268282df7269d8a11ac72aba3e000f2a17b574f2843f1de7" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.357083 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.470181 4687 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.476843 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.566495 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.627207 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.656529 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.845769 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 03 17:44:33 crc kubenswrapper[4687]: I1203 17:44:33.958088 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.222633 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.334310 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.363224 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5477954dc8-85fhm_87e402f7-d9b1-4dc7-aafe-1628b97d85d8/oauth-openshift/1.log" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.363817 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication_oauth-openshift-5477954dc8-85fhm_87e402f7-d9b1-4dc7-aafe-1628b97d85d8/oauth-openshift/0.log" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.364031 4687 generic.go:334] "Generic (PLEG): container finished" podID="87e402f7-d9b1-4dc7-aafe-1628b97d85d8" containerID="11521a8ac35d058d8946873d4f7b7cd720c129a08eef9b141d57d9edba4ac6ec" exitCode=255 Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.364115 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" event={"ID":"87e402f7-d9b1-4dc7-aafe-1628b97d85d8","Type":"ContainerDied","Data":"11521a8ac35d058d8946873d4f7b7cd720c129a08eef9b141d57d9edba4ac6ec"} Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.364242 4687 scope.go:117] "RemoveContainer" containerID="980e28c5e1de2a23268282df7269d8a11ac72aba3e000f2a17b574f2843f1de7" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.364736 4687 scope.go:117] "RemoveContainer" containerID="11521a8ac35d058d8946873d4f7b7cd720c129a08eef9b141d57d9edba4ac6ec" Dec 03 17:44:34 crc kubenswrapper[4687]: E1203 17:44:34.365078 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5477954dc8-85fhm_openshift-authentication(87e402f7-d9b1-4dc7-aafe-1628b97d85d8)\"" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" podUID="87e402f7-d9b1-4dc7-aafe-1628b97d85d8" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.465151 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.664955 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 
17:44:34.726859 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.805847 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 17:44:34 crc kubenswrapper[4687]: I1203 17:44:34.912086 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 17:44:35 crc kubenswrapper[4687]: I1203 17:44:35.202010 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 03 17:44:35 crc kubenswrapper[4687]: I1203 17:44:35.377212 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5477954dc8-85fhm_87e402f7-d9b1-4dc7-aafe-1628b97d85d8/oauth-openshift/1.log" Dec 03 17:44:35 crc kubenswrapper[4687]: I1203 17:44:35.377984 4687 scope.go:117] "RemoveContainer" containerID="11521a8ac35d058d8946873d4f7b7cd720c129a08eef9b141d57d9edba4ac6ec" Dec 03 17:44:35 crc kubenswrapper[4687]: E1203 17:44:35.378313 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5477954dc8-85fhm_openshift-authentication(87e402f7-d9b1-4dc7-aafe-1628b97d85d8)\"" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" podUID="87e402f7-d9b1-4dc7-aafe-1628b97d85d8" Dec 03 17:44:36 crc kubenswrapper[4687]: I1203 17:44:36.658718 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 17:44:41 crc kubenswrapper[4687]: I1203 17:44:41.137351 4687 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 03 17:44:41 crc 
kubenswrapper[4687]: I1203 17:44:41.137607 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ad0dea2226f23692efa7570f02e9212188a29fa017c4b68531e0e26c91a05ba4" gracePeriod=5 Dec 03 17:44:41 crc kubenswrapper[4687]: I1203 17:44:41.732167 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:41 crc kubenswrapper[4687]: I1203 17:44:41.733689 4687 scope.go:117] "RemoveContainer" containerID="11521a8ac35d058d8946873d4f7b7cd720c129a08eef9b141d57d9edba4ac6ec" Dec 03 17:44:41 crc kubenswrapper[4687]: E1203 17:44:41.734025 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5477954dc8-85fhm_openshift-authentication(87e402f7-d9b1-4dc7-aafe-1628b97d85d8)\"" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" podUID="87e402f7-d9b1-4dc7-aafe-1628b97d85d8" Dec 03 17:44:41 crc kubenswrapper[4687]: I1203 17:44:41.734579 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:42 crc kubenswrapper[4687]: I1203 17:44:42.416836 4687 scope.go:117] "RemoveContainer" containerID="11521a8ac35d058d8946873d4f7b7cd720c129a08eef9b141d57d9edba4ac6ec" Dec 03 17:44:42 crc kubenswrapper[4687]: E1203 17:44:42.417026 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-5477954dc8-85fhm_openshift-authentication(87e402f7-d9b1-4dc7-aafe-1628b97d85d8)\"" 
pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" podUID="87e402f7-d9b1-4dc7-aafe-1628b97d85d8" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.440217 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.440742 4687 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ad0dea2226f23692efa7570f02e9212188a29fa017c4b68531e0e26c91a05ba4" exitCode=137 Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.703465 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.703536 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849404 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849471 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849502 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849518 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849570 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849608 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849625 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849730 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849781 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849859 4687 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849874 4687 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.849884 4687 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.858155 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.951468 4687 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:46 crc kubenswrapper[4687]: I1203 17:44:46.951549 4687 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 03 17:44:47 crc kubenswrapper[4687]: I1203 17:44:47.416184 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 03 17:44:47 crc kubenswrapper[4687]: I1203 17:44:47.447917 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 03 17:44:47 crc kubenswrapper[4687]: I1203 17:44:47.447998 4687 scope.go:117] "RemoveContainer" containerID="ad0dea2226f23692efa7570f02e9212188a29fa017c4b68531e0e26c91a05ba4" Dec 03 17:44:47 crc kubenswrapper[4687]: I1203 17:44:47.448158 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 03 17:44:53 crc kubenswrapper[4687]: I1203 17:44:53.408095 4687 scope.go:117] "RemoveContainer" containerID="11521a8ac35d058d8946873d4f7b7cd720c129a08eef9b141d57d9edba4ac6ec" Dec 03 17:44:54 crc kubenswrapper[4687]: I1203 17:44:54.497910 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5477954dc8-85fhm_87e402f7-d9b1-4dc7-aafe-1628b97d85d8/oauth-openshift/1.log" Dec 03 17:44:54 crc kubenswrapper[4687]: I1203 17:44:54.499166 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" event={"ID":"87e402f7-d9b1-4dc7-aafe-1628b97d85d8","Type":"ContainerStarted","Data":"80f20441e97a4047a9b99fae17eaa26f7f536039c18201c6d5cc31b891caec89"} Dec 03 17:44:54 crc kubenswrapper[4687]: I1203 17:44:54.499587 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:54 crc kubenswrapper[4687]: I1203 17:44:54.504026 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" Dec 03 17:44:54 crc kubenswrapper[4687]: I1203 17:44:54.520021 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5477954dc8-85fhm" podStartSLOduration=70.52000337 podStartE2EDuration="1m10.52000337s" podCreationTimestamp="2025-12-03 17:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:44:54.518861286 +0000 UTC m=+327.409556719" watchObservedRunningTime="2025-12-03 17:44:54.52000337 +0000 UTC m=+327.410698813" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.176850 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5"] Dec 03 17:45:00 crc kubenswrapper[4687]: E1203 17:45:00.177170 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.177187 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.177314 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.177890 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.186425 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.186750 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.196353 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5"] Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.286078 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcxx\" (UniqueName: \"kubernetes.io/projected/b757c215-9461-4e39-bbd9-aa74875edd28-kube-api-access-jjcxx\") pod \"collect-profiles-29413065-fnzv5\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.286474 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b757c215-9461-4e39-bbd9-aa74875edd28-config-volume\") pod \"collect-profiles-29413065-fnzv5\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.286646 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b757c215-9461-4e39-bbd9-aa74875edd28-secret-volume\") pod \"collect-profiles-29413065-fnzv5\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.388185 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b757c215-9461-4e39-bbd9-aa74875edd28-config-volume\") pod \"collect-profiles-29413065-fnzv5\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.388342 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b757c215-9461-4e39-bbd9-aa74875edd28-secret-volume\") pod \"collect-profiles-29413065-fnzv5\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.388428 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjcxx\" (UniqueName: \"kubernetes.io/projected/b757c215-9461-4e39-bbd9-aa74875edd28-kube-api-access-jjcxx\") pod \"collect-profiles-29413065-fnzv5\" (UID: 
\"b757c215-9461-4e39-bbd9-aa74875edd28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.389154 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b757c215-9461-4e39-bbd9-aa74875edd28-config-volume\") pod \"collect-profiles-29413065-fnzv5\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.394599 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b757c215-9461-4e39-bbd9-aa74875edd28-secret-volume\") pod \"collect-profiles-29413065-fnzv5\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.406428 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjcxx\" (UniqueName: \"kubernetes.io/projected/b757c215-9461-4e39-bbd9-aa74875edd28-kube-api-access-jjcxx\") pod \"collect-profiles-29413065-fnzv5\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.497916 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:00 crc kubenswrapper[4687]: I1203 17:45:00.675986 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5"] Dec 03 17:45:01 crc kubenswrapper[4687]: I1203 17:45:01.539494 4687 generic.go:334] "Generic (PLEG): container finished" podID="16a03344-c427-400d-a611-a1be677c58b9" containerID="17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c" exitCode=0 Dec 03 17:45:01 crc kubenswrapper[4687]: I1203 17:45:01.539674 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" event={"ID":"16a03344-c427-400d-a611-a1be677c58b9","Type":"ContainerDied","Data":"17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c"} Dec 03 17:45:01 crc kubenswrapper[4687]: I1203 17:45:01.540645 4687 scope.go:117] "RemoveContainer" containerID="17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c" Dec 03 17:45:01 crc kubenswrapper[4687]: I1203 17:45:01.541997 4687 generic.go:334] "Generic (PLEG): container finished" podID="b757c215-9461-4e39-bbd9-aa74875edd28" containerID="8cf72dcb64bf9cae4bfd608bab3a7dfa6fefcaf9118ab63c72b23880bd46883a" exitCode=0 Dec 03 17:45:01 crc kubenswrapper[4687]: I1203 17:45:01.542037 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" event={"ID":"b757c215-9461-4e39-bbd9-aa74875edd28","Type":"ContainerDied","Data":"8cf72dcb64bf9cae4bfd608bab3a7dfa6fefcaf9118ab63c72b23880bd46883a"} Dec 03 17:45:01 crc kubenswrapper[4687]: I1203 17:45:01.542063 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" 
event={"ID":"b757c215-9461-4e39-bbd9-aa74875edd28","Type":"ContainerStarted","Data":"679e08b5b3f83a22322670005fe18be17169221e68dfec5264984f0a5f6a7800"} Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.549435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" event={"ID":"16a03344-c427-400d-a611-a1be677c58b9","Type":"ContainerStarted","Data":"2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89"} Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.550090 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.555439 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.789235 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.922337 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b757c215-9461-4e39-bbd9-aa74875edd28-config-volume\") pod \"b757c215-9461-4e39-bbd9-aa74875edd28\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.922550 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjcxx\" (UniqueName: \"kubernetes.io/projected/b757c215-9461-4e39-bbd9-aa74875edd28-kube-api-access-jjcxx\") pod \"b757c215-9461-4e39-bbd9-aa74875edd28\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.922583 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b757c215-9461-4e39-bbd9-aa74875edd28-secret-volume\") pod \"b757c215-9461-4e39-bbd9-aa74875edd28\" (UID: \"b757c215-9461-4e39-bbd9-aa74875edd28\") " Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.923725 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b757c215-9461-4e39-bbd9-aa74875edd28-config-volume" (OuterVolumeSpecName: "config-volume") pod "b757c215-9461-4e39-bbd9-aa74875edd28" (UID: "b757c215-9461-4e39-bbd9-aa74875edd28"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.929481 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b757c215-9461-4e39-bbd9-aa74875edd28-kube-api-access-jjcxx" (OuterVolumeSpecName: "kube-api-access-jjcxx") pod "b757c215-9461-4e39-bbd9-aa74875edd28" (UID: "b757c215-9461-4e39-bbd9-aa74875edd28"). 
InnerVolumeSpecName "kube-api-access-jjcxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:45:02 crc kubenswrapper[4687]: I1203 17:45:02.930638 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b757c215-9461-4e39-bbd9-aa74875edd28-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b757c215-9461-4e39-bbd9-aa74875edd28" (UID: "b757c215-9461-4e39-bbd9-aa74875edd28"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:45:03 crc kubenswrapper[4687]: I1203 17:45:03.025594 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjcxx\" (UniqueName: \"kubernetes.io/projected/b757c215-9461-4e39-bbd9-aa74875edd28-kube-api-access-jjcxx\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:03 crc kubenswrapper[4687]: I1203 17:45:03.025659 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b757c215-9461-4e39-bbd9-aa74875edd28-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:03 crc kubenswrapper[4687]: I1203 17:45:03.025672 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b757c215-9461-4e39-bbd9-aa74875edd28-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:03 crc kubenswrapper[4687]: I1203 17:45:03.557383 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" event={"ID":"b757c215-9461-4e39-bbd9-aa74875edd28","Type":"ContainerDied","Data":"679e08b5b3f83a22322670005fe18be17169221e68dfec5264984f0a5f6a7800"} Dec 03 17:45:03 crc kubenswrapper[4687]: I1203 17:45:03.557770 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679e08b5b3f83a22322670005fe18be17169221e68dfec5264984f0a5f6a7800" Dec 03 17:45:03 crc kubenswrapper[4687]: I1203 17:45:03.557420 4687 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5" Dec 03 17:45:14 crc kubenswrapper[4687]: I1203 17:45:14.112546 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:45:14 crc kubenswrapper[4687]: I1203 17:45:14.113459 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.469161 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r92g"] Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.470033 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" podUID="f2b2ecfc-7839-4364-9e65-988bb4f666f5" containerName="controller-manager" containerID="cri-o://a72a4dd18c94b8e9d57406e0bb49a0e2996f1caa1d9e2d0a3c0de07652afe288" gracePeriod=30 Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.589831 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"] Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.590072 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" podUID="646228e4-463e-4aed-a466-afb944163282" containerName="route-controller-manager" 
containerID="cri-o://34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523" gracePeriod=30 Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.696712 4687 generic.go:334] "Generic (PLEG): container finished" podID="f2b2ecfc-7839-4364-9e65-988bb4f666f5" containerID="a72a4dd18c94b8e9d57406e0bb49a0e2996f1caa1d9e2d0a3c0de07652afe288" exitCode=0 Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.696762 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" event={"ID":"f2b2ecfc-7839-4364-9e65-988bb4f666f5","Type":"ContainerDied","Data":"a72a4dd18c94b8e9d57406e0bb49a0e2996f1caa1d9e2d0a3c0de07652afe288"} Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.931684 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.993540 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5"] Dec 03 17:45:28 crc kubenswrapper[4687]: E1203 17:45:28.993766 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="646228e4-463e-4aed-a466-afb944163282" containerName="route-controller-manager" Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.993781 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="646228e4-463e-4aed-a466-afb944163282" containerName="route-controller-manager" Dec 03 17:45:28 crc kubenswrapper[4687]: E1203 17:45:28.993791 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b757c215-9461-4e39-bbd9-aa74875edd28" containerName="collect-profiles" Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.993797 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b757c215-9461-4e39-bbd9-aa74875edd28" containerName="collect-profiles" Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.993880 
4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b757c215-9461-4e39-bbd9-aa74875edd28" containerName="collect-profiles" Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.993895 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="646228e4-463e-4aed-a466-afb944163282" containerName="route-controller-manager" Dec 03 17:45:28 crc kubenswrapper[4687]: I1203 17:45:28.994284 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.010265 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5"] Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.054252 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-client-ca\") pod \"646228e4-463e-4aed-a466-afb944163282\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.054330 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-config\") pod \"646228e4-463e-4aed-a466-afb944163282\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.054387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7kx5\" (UniqueName: \"kubernetes.io/projected/646228e4-463e-4aed-a466-afb944163282-kube-api-access-n7kx5\") pod \"646228e4-463e-4aed-a466-afb944163282\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.054410 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/646228e4-463e-4aed-a466-afb944163282-serving-cert\") pod \"646228e4-463e-4aed-a466-afb944163282\" (UID: \"646228e4-463e-4aed-a466-afb944163282\") " Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.054629 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-config\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.054666 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-client-ca\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.054692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c7be3-f391-4eb4-9e04-6dc7578ff772-serving-cert\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.054728 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jsr6\" (UniqueName: \"kubernetes.io/projected/d73c7be3-f391-4eb4-9e04-6dc7578ff772-kube-api-access-8jsr6\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " 
pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.055032 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-client-ca" (OuterVolumeSpecName: "client-ca") pod "646228e4-463e-4aed-a466-afb944163282" (UID: "646228e4-463e-4aed-a466-afb944163282"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.056011 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-config" (OuterVolumeSpecName: "config") pod "646228e4-463e-4aed-a466-afb944163282" (UID: "646228e4-463e-4aed-a466-afb944163282"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.060168 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/646228e4-463e-4aed-a466-afb944163282-kube-api-access-n7kx5" (OuterVolumeSpecName: "kube-api-access-n7kx5") pod "646228e4-463e-4aed-a466-afb944163282" (UID: "646228e4-463e-4aed-a466-afb944163282"). InnerVolumeSpecName "kube-api-access-n7kx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.060215 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/646228e4-463e-4aed-a466-afb944163282-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "646228e4-463e-4aed-a466-afb944163282" (UID: "646228e4-463e-4aed-a466-afb944163282"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.155948 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-config\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.156007 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-client-ca\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.156032 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c7be3-f391-4eb4-9e04-6dc7578ff772-serving-cert\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.156069 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jsr6\" (UniqueName: \"kubernetes.io/projected/d73c7be3-f391-4eb4-9e04-6dc7578ff772-kube-api-access-8jsr6\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.156176 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.156300 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/646228e4-463e-4aed-a466-afb944163282-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.156669 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7kx5\" (UniqueName: \"kubernetes.io/projected/646228e4-463e-4aed-a466-afb944163282-kube-api-access-n7kx5\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.156690 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/646228e4-463e-4aed-a466-afb944163282-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.156989 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-client-ca\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.158093 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-config\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.160332 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c7be3-f391-4eb4-9e04-6dc7578ff772-serving-cert\") pod 
\"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.184253 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jsr6\" (UniqueName: \"kubernetes.io/projected/d73c7be3-f391-4eb4-9e04-6dc7578ff772-kube-api-access-8jsr6\") pod \"route-controller-manager-54d8bb9649-mdrg5\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.244751 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.313929 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.359761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dprss\" (UniqueName: \"kubernetes.io/projected/f2b2ecfc-7839-4364-9e65-988bb4f666f5-kube-api-access-dprss\") pod \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.359879 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b2ecfc-7839-4364-9e65-988bb4f666f5-serving-cert\") pod \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.359906 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-proxy-ca-bundles\") pod \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.359934 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-config\") pod \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.360765 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f2b2ecfc-7839-4364-9e65-988bb4f666f5" (UID: "f2b2ecfc-7839-4364-9e65-988bb4f666f5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.360909 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-client-ca\") pod \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\" (UID: \"f2b2ecfc-7839-4364-9e65-988bb4f666f5\") " Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.360946 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-config" (OuterVolumeSpecName: "config") pod "f2b2ecfc-7839-4364-9e65-988bb4f666f5" (UID: "f2b2ecfc-7839-4364-9e65-988bb4f666f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.361314 4687 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.361343 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.361400 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-client-ca" (OuterVolumeSpecName: "client-ca") pod "f2b2ecfc-7839-4364-9e65-988bb4f666f5" (UID: "f2b2ecfc-7839-4364-9e65-988bb4f666f5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.362586 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b2ecfc-7839-4364-9e65-988bb4f666f5-kube-api-access-dprss" (OuterVolumeSpecName: "kube-api-access-dprss") pod "f2b2ecfc-7839-4364-9e65-988bb4f666f5" (UID: "f2b2ecfc-7839-4364-9e65-988bb4f666f5"). InnerVolumeSpecName "kube-api-access-dprss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.363184 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b2ecfc-7839-4364-9e65-988bb4f666f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f2b2ecfc-7839-4364-9e65-988bb4f666f5" (UID: "f2b2ecfc-7839-4364-9e65-988bb4f666f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.464265 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dprss\" (UniqueName: \"kubernetes.io/projected/f2b2ecfc-7839-4364-9e65-988bb4f666f5-kube-api-access-dprss\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.464308 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b2ecfc-7839-4364-9e65-988bb4f666f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.464321 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b2ecfc-7839-4364-9e65-988bb4f666f5-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.505377 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d9fccc5c-gmg42"] Dec 03 17:45:29 crc kubenswrapper[4687]: E1203 17:45:29.505595 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b2ecfc-7839-4364-9e65-988bb4f666f5" containerName="controller-manager" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.505606 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b2ecfc-7839-4364-9e65-988bb4f666f5" containerName="controller-manager" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.505707 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b2ecfc-7839-4364-9e65-988bb4f666f5" containerName="controller-manager" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.506243 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.516640 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d9fccc5c-gmg42"] Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.574945 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5"] Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.666954 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d2d4598-7377-4009-a6c0-e2fad87e60bb-serving-cert\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.667505 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d2d4598-7377-4009-a6c0-e2fad87e60bb-client-ca\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.667571 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d2d4598-7377-4009-a6c0-e2fad87e60bb-proxy-ca-bundles\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.667595 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8d2d4598-7377-4009-a6c0-e2fad87e60bb-config\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.667673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dlxs\" (UniqueName: \"kubernetes.io/projected/8d2d4598-7377-4009-a6c0-e2fad87e60bb-kube-api-access-8dlxs\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.712780 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" event={"ID":"d73c7be3-f391-4eb4-9e04-6dc7578ff772","Type":"ContainerStarted","Data":"937a2bec4b61cb01dee055071de5bced59bbc0971c171032c784cd92254fd34b"} Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.717308 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" event={"ID":"f2b2ecfc-7839-4364-9e65-988bb4f666f5","Type":"ContainerDied","Data":"81b5a45e7a039c668c0edb420228f371f38b066711d190d607f696d6f56e6c6d"} Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.717369 4687 scope.go:117] "RemoveContainer" containerID="a72a4dd18c94b8e9d57406e0bb49a0e2996f1caa1d9e2d0a3c0de07652afe288" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.717496 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4r92g" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.722602 4687 generic.go:334] "Generic (PLEG): container finished" podID="646228e4-463e-4aed-a466-afb944163282" containerID="34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523" exitCode=0 Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.722638 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" event={"ID":"646228e4-463e-4aed-a466-afb944163282","Type":"ContainerDied","Data":"34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523"} Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.722659 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" event={"ID":"646228e4-463e-4aed-a466-afb944163282","Type":"ContainerDied","Data":"e7849d6ad4806cc26bf10bece769323b957990a06c1fd72750344b9dd6b954aa"} Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.722702 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.737477 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r92g"] Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.744178 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4r92g"] Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.747637 4687 scope.go:117] "RemoveContainer" containerID="34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.748937 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"] Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.752405 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5qrlx"] Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.764014 4687 scope.go:117] "RemoveContainer" containerID="34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523" Dec 03 17:45:29 crc kubenswrapper[4687]: E1203 17:45:29.764616 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523\": container with ID starting with 34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523 not found: ID does not exist" containerID="34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.764670 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523"} err="failed to get container status 
\"34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523\": rpc error: code = NotFound desc = could not find container \"34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523\": container with ID starting with 34475b4bf451abbf673c486d8b53ff35d7ffcd11a21ff767a5b120974b7b2523 not found: ID does not exist" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.769439 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d2d4598-7377-4009-a6c0-e2fad87e60bb-serving-cert\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.769478 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d2d4598-7377-4009-a6c0-e2fad87e60bb-client-ca\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.769553 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d2d4598-7377-4009-a6c0-e2fad87e60bb-config\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.769579 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d2d4598-7377-4009-a6c0-e2fad87e60bb-proxy-ca-bundles\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc 
kubenswrapper[4687]: I1203 17:45:29.769619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dlxs\" (UniqueName: \"kubernetes.io/projected/8d2d4598-7377-4009-a6c0-e2fad87e60bb-kube-api-access-8dlxs\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.770966 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d2d4598-7377-4009-a6c0-e2fad87e60bb-proxy-ca-bundles\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.770979 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d2d4598-7377-4009-a6c0-e2fad87e60bb-client-ca\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.771599 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d2d4598-7377-4009-a6c0-e2fad87e60bb-config\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.787145 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d2d4598-7377-4009-a6c0-e2fad87e60bb-serving-cert\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " 
pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.801714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dlxs\" (UniqueName: \"kubernetes.io/projected/8d2d4598-7377-4009-a6c0-e2fad87e60bb-kube-api-access-8dlxs\") pod \"controller-manager-d9fccc5c-gmg42\" (UID: \"8d2d4598-7377-4009-a6c0-e2fad87e60bb\") " pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.823165 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:29 crc kubenswrapper[4687]: I1203 17:45:29.995762 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d9fccc5c-gmg42"] Dec 03 17:45:30 crc kubenswrapper[4687]: W1203 17:45:30.001008 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d2d4598_7377_4009_a6c0_e2fad87e60bb.slice/crio-012aae71fccb0f9665b026ad3ea4009b509d8cd5d2e586da0d2530a5535b072a WatchSource:0}: Error finding container 012aae71fccb0f9665b026ad3ea4009b509d8cd5d2e586da0d2530a5535b072a: Status 404 returned error can't find the container with id 012aae71fccb0f9665b026ad3ea4009b509d8cd5d2e586da0d2530a5535b072a Dec 03 17:45:30 crc kubenswrapper[4687]: I1203 17:45:30.728911 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" event={"ID":"d73c7be3-f391-4eb4-9e04-6dc7578ff772","Type":"ContainerStarted","Data":"2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38"} Dec 03 17:45:30 crc kubenswrapper[4687]: I1203 17:45:30.730207 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 
03 17:45:30 crc kubenswrapper[4687]: I1203 17:45:30.732626 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" event={"ID":"8d2d4598-7377-4009-a6c0-e2fad87e60bb","Type":"ContainerStarted","Data":"7e5348ff033684612df6c41d9646b72ea91dff08579d886eef5c4d90206ea6de"} Dec 03 17:45:30 crc kubenswrapper[4687]: I1203 17:45:30.732656 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" event={"ID":"8d2d4598-7377-4009-a6c0-e2fad87e60bb","Type":"ContainerStarted","Data":"012aae71fccb0f9665b026ad3ea4009b509d8cd5d2e586da0d2530a5535b072a"} Dec 03 17:45:30 crc kubenswrapper[4687]: I1203 17:45:30.733249 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:30 crc kubenswrapper[4687]: I1203 17:45:30.734513 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:45:30 crc kubenswrapper[4687]: I1203 17:45:30.737664 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" Dec 03 17:45:30 crc kubenswrapper[4687]: I1203 17:45:30.747816 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" podStartSLOduration=2.747798727 podStartE2EDuration="2.747798727s" podCreationTimestamp="2025-12-03 17:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:45:30.746266393 +0000 UTC m=+363.636961836" watchObservedRunningTime="2025-12-03 17:45:30.747798727 +0000 UTC m=+363.638494160" Dec 03 17:45:30 crc kubenswrapper[4687]: I1203 17:45:30.797443 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d9fccc5c-gmg42" podStartSLOduration=1.797426868 podStartE2EDuration="1.797426868s" podCreationTimestamp="2025-12-03 17:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:45:30.792489807 +0000 UTC m=+363.683185240" watchObservedRunningTime="2025-12-03 17:45:30.797426868 +0000 UTC m=+363.688122301" Dec 03 17:45:31 crc kubenswrapper[4687]: I1203 17:45:31.414875 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="646228e4-463e-4aed-a466-afb944163282" path="/var/lib/kubelet/pods/646228e4-463e-4aed-a466-afb944163282/volumes" Dec 03 17:45:31 crc kubenswrapper[4687]: I1203 17:45:31.415986 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b2ecfc-7839-4364-9e65-988bb4f666f5" path="/var/lib/kubelet/pods/f2b2ecfc-7839-4364-9e65-988bb4f666f5/volumes" Dec 03 17:45:44 crc kubenswrapper[4687]: I1203 17:45:44.112073 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:45:44 crc kubenswrapper[4687]: I1203 17:45:44.112440 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.463209 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5"] Dec 03 17:46:08 crc 
kubenswrapper[4687]: I1203 17:46:08.463910 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" podUID="d73c7be3-f391-4eb4-9e04-6dc7578ff772" containerName="route-controller-manager" containerID="cri-o://2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38" gracePeriod=30 Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.598992 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mddxv"] Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.599808 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.616442 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mddxv"] Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.665597 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0df9b738-72a1-4f8e-9aae-35b83a11c582-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.665657 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r98w\" (UniqueName: \"kubernetes.io/projected/0df9b738-72a1-4f8e-9aae-35b83a11c582-kube-api-access-4r98w\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.665814 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/0df9b738-72a1-4f8e-9aae-35b83a11c582-registry-tls\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.665874 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0df9b738-72a1-4f8e-9aae-35b83a11c582-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.665904 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0df9b738-72a1-4f8e-9aae-35b83a11c582-bound-sa-token\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.665959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.666044 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0df9b738-72a1-4f8e-9aae-35b83a11c582-trusted-ca\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.666080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0df9b738-72a1-4f8e-9aae-35b83a11c582-registry-certificates\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.690899 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.768310 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0df9b738-72a1-4f8e-9aae-35b83a11c582-registry-certificates\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.768454 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0df9b738-72a1-4f8e-9aae-35b83a11c582-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.768493 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r98w\" (UniqueName: 
\"kubernetes.io/projected/0df9b738-72a1-4f8e-9aae-35b83a11c582-kube-api-access-4r98w\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.768533 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0df9b738-72a1-4f8e-9aae-35b83a11c582-registry-tls\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.768563 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0df9b738-72a1-4f8e-9aae-35b83a11c582-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.768586 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0df9b738-72a1-4f8e-9aae-35b83a11c582-bound-sa-token\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.768638 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0df9b738-72a1-4f8e-9aae-35b83a11c582-trusted-ca\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.769264 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0df9b738-72a1-4f8e-9aae-35b83a11c582-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.769852 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0df9b738-72a1-4f8e-9aae-35b83a11c582-registry-certificates\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.772758 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0df9b738-72a1-4f8e-9aae-35b83a11c582-trusted-ca\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.777541 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0df9b738-72a1-4f8e-9aae-35b83a11c582-registry-tls\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.785997 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0df9b738-72a1-4f8e-9aae-35b83a11c582-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 
17:46:08.788693 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0df9b738-72a1-4f8e-9aae-35b83a11c582-bound-sa-token\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.789826 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r98w\" (UniqueName: \"kubernetes.io/projected/0df9b738-72a1-4f8e-9aae-35b83a11c582-kube-api-access-4r98w\") pod \"image-registry-66df7c8f76-mddxv\" (UID: \"0df9b738-72a1-4f8e-9aae-35b83a11c582\") " pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.863036 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.921205 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.935694 4687 generic.go:334] "Generic (PLEG): container finished" podID="d73c7be3-f391-4eb4-9e04-6dc7578ff772" containerID="2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38" exitCode=0 Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.935744 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" event={"ID":"d73c7be3-f391-4eb4-9e04-6dc7578ff772","Type":"ContainerDied","Data":"2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38"} Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.935806 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.935840 4687 scope.go:117] "RemoveContainer" containerID="2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.935823 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5" event={"ID":"d73c7be3-f391-4eb4-9e04-6dc7578ff772","Type":"ContainerDied","Data":"937a2bec4b61cb01dee055071de5bced59bbc0971c171032c784cd92254fd34b"} Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.950952 4687 scope.go:117] "RemoveContainer" containerID="2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38" Dec 03 17:46:08 crc kubenswrapper[4687]: E1203 17:46:08.951438 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38\": container with ID starting with 2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38 not found: ID does not exist" containerID="2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.951490 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38"} err="failed to get container status \"2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38\": rpc error: code = NotFound desc = could not find container \"2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38\": container with ID starting with 2be5ff9e066a69e06fd2a7e6cf702aa811998106cad8ee0d04aba4098cb95f38 not found: ID does not exist" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.972173 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-client-ca\") pod \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.972334 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-config\") pod \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.972402 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jsr6\" (UniqueName: \"kubernetes.io/projected/d73c7be3-f391-4eb4-9e04-6dc7578ff772-kube-api-access-8jsr6\") pod \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.972468 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c7be3-f391-4eb4-9e04-6dc7578ff772-serving-cert\") pod \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\" (UID: \"d73c7be3-f391-4eb4-9e04-6dc7578ff772\") " Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.973076 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-client-ca" (OuterVolumeSpecName: "client-ca") pod "d73c7be3-f391-4eb4-9e04-6dc7578ff772" (UID: "d73c7be3-f391-4eb4-9e04-6dc7578ff772"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.973093 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-config" (OuterVolumeSpecName: "config") pod "d73c7be3-f391-4eb4-9e04-6dc7578ff772" (UID: "d73c7be3-f391-4eb4-9e04-6dc7578ff772"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.976107 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d73c7be3-f391-4eb4-9e04-6dc7578ff772-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d73c7be3-f391-4eb4-9e04-6dc7578ff772" (UID: "d73c7be3-f391-4eb4-9e04-6dc7578ff772"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:46:08 crc kubenswrapper[4687]: I1203 17:46:08.977698 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d73c7be3-f391-4eb4-9e04-6dc7578ff772-kube-api-access-8jsr6" (OuterVolumeSpecName: "kube-api-access-8jsr6") pod "d73c7be3-f391-4eb4-9e04-6dc7578ff772" (UID: "d73c7be3-f391-4eb4-9e04-6dc7578ff772"). InnerVolumeSpecName "kube-api-access-8jsr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.074027 4687 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-client-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.074059 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c7be3-f391-4eb4-9e04-6dc7578ff772-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.074070 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jsr6\" (UniqueName: \"kubernetes.io/projected/d73c7be3-f391-4eb4-9e04-6dc7578ff772-kube-api-access-8jsr6\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.074082 4687 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c7be3-f391-4eb4-9e04-6dc7578ff772-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.264941 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5"] Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.271517 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54d8bb9649-mdrg5"] Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.307739 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mddxv"] Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.417700 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d73c7be3-f391-4eb4-9e04-6dc7578ff772" path="/var/lib/kubelet/pods/d73c7be3-f391-4eb4-9e04-6dc7578ff772/volumes" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 
17:46:09.713384 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss"] Dec 03 17:46:09 crc kubenswrapper[4687]: E1203 17:46:09.713631 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d73c7be3-f391-4eb4-9e04-6dc7578ff772" containerName="route-controller-manager" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.713642 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d73c7be3-f391-4eb4-9e04-6dc7578ff772" containerName="route-controller-manager" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.713756 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d73c7be3-f391-4eb4-9e04-6dc7578ff772" containerName="route-controller-manager" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.714171 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.715857 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.716157 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.717259 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.717401 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.717602 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 
17:46:09.718226 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.726308 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss"] Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.785242 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-config\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.785328 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-serving-cert\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.785572 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-client-ca\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.785611 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcsnv\" (UniqueName: \"kubernetes.io/projected/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-kube-api-access-pcsnv\") pod 
\"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.887050 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-serving-cert\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.887633 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-client-ca\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.887672 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcsnv\" (UniqueName: \"kubernetes.io/projected/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-kube-api-access-pcsnv\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.887719 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-config\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.889773 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-config\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.889920 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-client-ca\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.897803 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-serving-cert\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.902283 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcsnv\" (UniqueName: \"kubernetes.io/projected/ec72e960-10a1-4166-a3f6-d4fefd6a2e2e-kube-api-access-pcsnv\") pod \"route-controller-manager-67d97ffdf9-446ss\" (UID: \"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e\") " pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.943663 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" event={"ID":"0df9b738-72a1-4f8e-9aae-35b83a11c582","Type":"ContainerStarted","Data":"d5d1f39ae913ce852b962d861a95d873e84faafcb3cbce41ef3cb1af625c8fd9"} Dec 03 17:46:09 crc 
kubenswrapper[4687]: I1203 17:46:09.943710 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" event={"ID":"0df9b738-72a1-4f8e-9aae-35b83a11c582","Type":"ContainerStarted","Data":"e59c3375c2658d5b7229b533124b06925543c1d05f45378e85ec22f7bf07e311"} Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.943825 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" Dec 03 17:46:09 crc kubenswrapper[4687]: I1203 17:46:09.960679 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mddxv" podStartSLOduration=1.960661046 podStartE2EDuration="1.960661046s" podCreationTimestamp="2025-12-03 17:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:46:09.95983573 +0000 UTC m=+402.850531163" watchObservedRunningTime="2025-12-03 17:46:09.960661046 +0000 UTC m=+402.851356479" Dec 03 17:46:10 crc kubenswrapper[4687]: I1203 17:46:10.030960 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:10 crc kubenswrapper[4687]: I1203 17:46:10.435032 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss"] Dec 03 17:46:10 crc kubenswrapper[4687]: W1203 17:46:10.441453 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec72e960_10a1_4166_a3f6_d4fefd6a2e2e.slice/crio-8c1074af2baf09a556b94337af682ebfb4328a1fd4b918901e7c2a76181def20 WatchSource:0}: Error finding container 8c1074af2baf09a556b94337af682ebfb4328a1fd4b918901e7c2a76181def20: Status 404 returned error can't find the container with id 8c1074af2baf09a556b94337af682ebfb4328a1fd4b918901e7c2a76181def20 Dec 03 17:46:10 crc kubenswrapper[4687]: I1203 17:46:10.950742 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" event={"ID":"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e","Type":"ContainerStarted","Data":"8c1074af2baf09a556b94337af682ebfb4328a1fd4b918901e7c2a76181def20"} Dec 03 17:46:11 crc kubenswrapper[4687]: I1203 17:46:11.959177 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" event={"ID":"ec72e960-10a1-4166-a3f6-d4fefd6a2e2e","Type":"ContainerStarted","Data":"7366b162bce7e9e5c79ce3c7da306bd285b2d2cb73ed6b94bef14a7e94e8c4c2"} Dec 03 17:46:11 crc kubenswrapper[4687]: I1203 17:46:11.959929 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 17:46:11 crc kubenswrapper[4687]: I1203 17:46:11.965007 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" Dec 03 
17:46:11 crc kubenswrapper[4687]: I1203 17:46:11.986564 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67d97ffdf9-446ss" podStartSLOduration=3.986546 podStartE2EDuration="3.986546s" podCreationTimestamp="2025-12-03 17:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:46:11.981890689 +0000 UTC m=+404.872586132" watchObservedRunningTime="2025-12-03 17:46:11.986546 +0000 UTC m=+404.877241433" Dec 03 17:46:14 crc kubenswrapper[4687]: I1203 17:46:14.111609 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:46:14 crc kubenswrapper[4687]: I1203 17:46:14.111686 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:46:14 crc kubenswrapper[4687]: I1203 17:46:14.111739 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:46:14 crc kubenswrapper[4687]: I1203 17:46:14.112368 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"830dea32fbec17f41ad28fddfaf773cf970c307273af21e7663ef8a4b33a9fd6"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 17:46:14 crc 
kubenswrapper[4687]: I1203 17:46:14.112431 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://830dea32fbec17f41ad28fddfaf773cf970c307273af21e7663ef8a4b33a9fd6" gracePeriod=600
Dec 03 17:46:14 crc kubenswrapper[4687]: I1203 17:46:14.975843 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="830dea32fbec17f41ad28fddfaf773cf970c307273af21e7663ef8a4b33a9fd6" exitCode=0
Dec 03 17:46:14 crc kubenswrapper[4687]: I1203 17:46:14.975918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"830dea32fbec17f41ad28fddfaf773cf970c307273af21e7663ef8a4b33a9fd6"}
Dec 03 17:46:14 crc kubenswrapper[4687]: I1203 17:46:14.976471 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"a44bca3b334d1f1acdc92525e8a8a678e3debaa223bb0727f5438679d7038c28"}
Dec 03 17:46:14 crc kubenswrapper[4687]: I1203 17:46:14.976493 4687 scope.go:117] "RemoveContainer" containerID="d9174351fa82471c8b46cf1aa5aa8929ddcb165b56db0e2d06d8585631be8398"
Dec 03 17:46:28 crc kubenswrapper[4687]: I1203 17:46:28.926390 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mddxv"
Dec 03 17:46:28 crc kubenswrapper[4687]: I1203 17:46:28.993317 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gg6bm"]
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.192836 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9rknl"]
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.193761 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9rknl" podUID="73547923-4959-473f-b335-f1bccb070d16" containerName="registry-server" containerID="cri-o://ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f" gracePeriod=30
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.207549 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d59r5"]
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.207825 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d59r5" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerName="registry-server" containerID="cri-o://b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b" gracePeriod=30
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.218710 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-774pl"]
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.219012 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" podUID="16a03344-c427-400d-a611-a1be677c58b9" containerName="marketplace-operator" containerID="cri-o://2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89" gracePeriod=30
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.223702 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-clffd"]
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.224000 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-clffd" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" containerName="registry-server" containerID="cri-o://9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512" gracePeriod=30
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.236214 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4j9rv"]
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.236503 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4j9rv" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerName="registry-server" containerID="cri-o://947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b" gracePeriod=30
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.240612 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6bp27"]
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.241226 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.250757 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6bp27"]
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.256803 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d7aa828b-8739-41ee-bdd4-81f7b5421561-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6bp27\" (UID: \"d7aa828b-8739-41ee-bdd4-81f7b5421561\") " pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.256891 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687qr\" (UniqueName: \"kubernetes.io/projected/d7aa828b-8739-41ee-bdd4-81f7b5421561-kube-api-access-687qr\") pod \"marketplace-operator-79b997595-6bp27\" (UID: \"d7aa828b-8739-41ee-bdd4-81f7b5421561\") " pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.256961 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7aa828b-8739-41ee-bdd4-81f7b5421561-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6bp27\" (UID: \"d7aa828b-8739-41ee-bdd4-81f7b5421561\") " pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.359102 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7aa828b-8739-41ee-bdd4-81f7b5421561-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6bp27\" (UID: \"d7aa828b-8739-41ee-bdd4-81f7b5421561\") " pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.361588 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d7aa828b-8739-41ee-bdd4-81f7b5421561-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6bp27\" (UID: \"d7aa828b-8739-41ee-bdd4-81f7b5421561\") " pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.361768 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-687qr\" (UniqueName: \"kubernetes.io/projected/d7aa828b-8739-41ee-bdd4-81f7b5421561-kube-api-access-687qr\") pod \"marketplace-operator-79b997595-6bp27\" (UID: \"d7aa828b-8739-41ee-bdd4-81f7b5421561\") " pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.365360 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d7aa828b-8739-41ee-bdd4-81f7b5421561-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6bp27\" (UID: \"d7aa828b-8739-41ee-bdd4-81f7b5421561\") " pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.379135 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d7aa828b-8739-41ee-bdd4-81f7b5421561-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6bp27\" (UID: \"d7aa828b-8739-41ee-bdd4-81f7b5421561\") " pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.384919 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-687qr\" (UniqueName: \"kubernetes.io/projected/d7aa828b-8739-41ee-bdd4-81f7b5421561-kube-api-access-687qr\") pod \"marketplace-operator-79b997595-6bp27\" (UID: \"d7aa828b-8739-41ee-bdd4-81f7b5421561\") " pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.733792 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6bp27"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.738842 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9rknl"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.743063 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-774pl"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.747418 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4j9rv"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.754528 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clffd"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.762895 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d59r5"
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.767579 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16a03344-c427-400d-a611-a1be677c58b9-marketplace-trusted-ca\") pod \"16a03344-c427-400d-a611-a1be677c58b9\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.767633 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb27h\" (UniqueName: \"kubernetes.io/projected/16a03344-c427-400d-a611-a1be677c58b9-kube-api-access-pb27h\") pod \"16a03344-c427-400d-a611-a1be677c58b9\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.767719 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-catalog-content\") pod \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.767742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16a03344-c427-400d-a611-a1be677c58b9-marketplace-operator-metrics\") pod \"16a03344-c427-400d-a611-a1be677c58b9\" (UID: \"16a03344-c427-400d-a611-a1be677c58b9\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.767833 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brg89\" (UniqueName: \"kubernetes.io/projected/73547923-4959-473f-b335-f1bccb070d16-kube-api-access-brg89\") pod \"73547923-4959-473f-b335-f1bccb070d16\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.767866 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-catalog-content\") pod \"73547923-4959-473f-b335-f1bccb070d16\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.767897 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-utilities\") pod \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.767949 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-utilities\") pod \"73547923-4959-473f-b335-f1bccb070d16\" (UID: \"73547923-4959-473f-b335-f1bccb070d16\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.767970 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzsnb\" (UniqueName: \"kubernetes.io/projected/0a8f332f-4fac-4824-90b9-a922f0bb35c2-kube-api-access-nzsnb\") pod \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\" (UID: \"0a8f332f-4fac-4824-90b9-a922f0bb35c2\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.769743 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16a03344-c427-400d-a611-a1be677c58b9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "16a03344-c427-400d-a611-a1be677c58b9" (UID: "16a03344-c427-400d-a611-a1be677c58b9"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.775366 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-utilities" (OuterVolumeSpecName: "utilities") pod "0a8f332f-4fac-4824-90b9-a922f0bb35c2" (UID: "0a8f332f-4fac-4824-90b9-a922f0bb35c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.775948 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16a03344-c427-400d-a611-a1be677c58b9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "16a03344-c427-400d-a611-a1be677c58b9" (UID: "16a03344-c427-400d-a611-a1be677c58b9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.776243 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-utilities" (OuterVolumeSpecName: "utilities") pod "73547923-4959-473f-b335-f1bccb070d16" (UID: "73547923-4959-473f-b335-f1bccb070d16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.788307 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8f332f-4fac-4824-90b9-a922f0bb35c2-kube-api-access-nzsnb" (OuterVolumeSpecName: "kube-api-access-nzsnb") pod "0a8f332f-4fac-4824-90b9-a922f0bb35c2" (UID: "0a8f332f-4fac-4824-90b9-a922f0bb35c2"). InnerVolumeSpecName "kube-api-access-nzsnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.789101 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73547923-4959-473f-b335-f1bccb070d16-kube-api-access-brg89" (OuterVolumeSpecName: "kube-api-access-brg89") pod "73547923-4959-473f-b335-f1bccb070d16" (UID: "73547923-4959-473f-b335-f1bccb070d16"). InnerVolumeSpecName "kube-api-access-brg89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.814913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16a03344-c427-400d-a611-a1be677c58b9-kube-api-access-pb27h" (OuterVolumeSpecName: "kube-api-access-pb27h") pod "16a03344-c427-400d-a611-a1be677c58b9" (UID: "16a03344-c427-400d-a611-a1be677c58b9"). InnerVolumeSpecName "kube-api-access-pb27h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.873784 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-utilities\") pod \"ad1c1379-bfc3-4496-989d-e24243316f45\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874138 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6kfz\" (UniqueName: \"kubernetes.io/projected/4148743d-b671-48a0-b1f0-ad5a3b73a93a-kube-api-access-b6kfz\") pod \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874173 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xskgw\" (UniqueName: \"kubernetes.io/projected/ad1c1379-bfc3-4496-989d-e24243316f45-kube-api-access-xskgw\") pod \"ad1c1379-bfc3-4496-989d-e24243316f45\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874244 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-catalog-content\") pod \"ad1c1379-bfc3-4496-989d-e24243316f45\" (UID: \"ad1c1379-bfc3-4496-989d-e24243316f45\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874275 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-catalog-content\") pod \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874303 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-utilities\") pod \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\" (UID: \"4148743d-b671-48a0-b1f0-ad5a3b73a93a\") "
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874530 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brg89\" (UniqueName: \"kubernetes.io/projected/73547923-4959-473f-b335-f1bccb070d16-kube-api-access-brg89\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874542 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874551 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874561 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzsnb\" (UniqueName: \"kubernetes.io/projected/0a8f332f-4fac-4824-90b9-a922f0bb35c2-kube-api-access-nzsnb\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874570 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16a03344-c427-400d-a611-a1be677c58b9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874578 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb27h\" (UniqueName: \"kubernetes.io/projected/16a03344-c427-400d-a611-a1be677c58b9-kube-api-access-pb27h\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.874586 4687 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/16a03344-c427-400d-a611-a1be677c58b9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.876462 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-utilities" (OuterVolumeSpecName: "utilities") pod "4148743d-b671-48a0-b1f0-ad5a3b73a93a" (UID: "4148743d-b671-48a0-b1f0-ad5a3b73a93a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.877680 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-utilities" (OuterVolumeSpecName: "utilities") pod "ad1c1379-bfc3-4496-989d-e24243316f45" (UID: "ad1c1379-bfc3-4496-989d-e24243316f45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.881911 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4148743d-b671-48a0-b1f0-ad5a3b73a93a-kube-api-access-b6kfz" (OuterVolumeSpecName: "kube-api-access-b6kfz") pod "4148743d-b671-48a0-b1f0-ad5a3b73a93a" (UID: "4148743d-b671-48a0-b1f0-ad5a3b73a93a"). InnerVolumeSpecName "kube-api-access-b6kfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.883472 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad1c1379-bfc3-4496-989d-e24243316f45-kube-api-access-xskgw" (OuterVolumeSpecName: "kube-api-access-xskgw") pod "ad1c1379-bfc3-4496-989d-e24243316f45" (UID: "ad1c1379-bfc3-4496-989d-e24243316f45"). InnerVolumeSpecName "kube-api-access-xskgw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.891350 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73547923-4959-473f-b335-f1bccb070d16" (UID: "73547923-4959-473f-b335-f1bccb070d16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.898568 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad1c1379-bfc3-4496-989d-e24243316f45" (UID: "ad1c1379-bfc3-4496-989d-e24243316f45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.940620 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4148743d-b671-48a0-b1f0-ad5a3b73a93a" (UID: "4148743d-b671-48a0-b1f0-ad5a3b73a93a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.948885 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a8f332f-4fac-4824-90b9-a922f0bb35c2" (UID: "0a8f332f-4fac-4824-90b9-a922f0bb35c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.976188 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.976217 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6kfz\" (UniqueName: \"kubernetes.io/projected/4148743d-b671-48a0-b1f0-ad5a3b73a93a-kube-api-access-b6kfz\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.976230 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xskgw\" (UniqueName: \"kubernetes.io/projected/ad1c1379-bfc3-4496-989d-e24243316f45-kube-api-access-xskgw\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.976238 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a8f332f-4fac-4824-90b9-a922f0bb35c2-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.976247 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad1c1379-bfc3-4496-989d-e24243316f45-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.976254 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.976263 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73547923-4959-473f-b335-f1bccb070d16-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:30 crc kubenswrapper[4687]: I1203 17:46:30.976270 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4148743d-b671-48a0-b1f0-ad5a3b73a93a-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.072317 4687 generic.go:334] "Generic (PLEG): container finished" podID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerID="947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b" exitCode=0
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.072408 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4j9rv"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.072405 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j9rv" event={"ID":"0a8f332f-4fac-4824-90b9-a922f0bb35c2","Type":"ContainerDied","Data":"947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.072591 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4j9rv" event={"ID":"0a8f332f-4fac-4824-90b9-a922f0bb35c2","Type":"ContainerDied","Data":"8f6da813a9bf253667aeae043b9741f0e248fad5d3b7702f75138a63ca1baf7f"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.072625 4687 scope.go:117] "RemoveContainer" containerID="947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.077164 4687 generic.go:334] "Generic (PLEG): container finished" podID="16a03344-c427-400d-a611-a1be677c58b9" containerID="2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89" exitCode=0
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.077230 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" event={"ID":"16a03344-c427-400d-a611-a1be677c58b9","Type":"ContainerDied","Data":"2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.077254 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-774pl" event={"ID":"16a03344-c427-400d-a611-a1be677c58b9","Type":"ContainerDied","Data":"d2ba40971f071f685d512cd57625f2fbc29ee9d4492eb235d1f180156181dc11"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.077254 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-774pl"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.081688 4687 generic.go:334] "Generic (PLEG): container finished" podID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerID="b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b" exitCode=0
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.081748 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d59r5"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.081787 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d59r5" event={"ID":"4148743d-b671-48a0-b1f0-ad5a3b73a93a","Type":"ContainerDied","Data":"b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.081824 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d59r5" event={"ID":"4148743d-b671-48a0-b1f0-ad5a3b73a93a","Type":"ContainerDied","Data":"3cff370a485ce7be3b1d4a43a4b2ccfe94c8d63851fd7598c819fa144ef7f76d"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.084620 4687 generic.go:334] "Generic (PLEG): container finished" podID="73547923-4959-473f-b335-f1bccb070d16" containerID="ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f" exitCode=0
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.084658 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rknl" event={"ID":"73547923-4959-473f-b335-f1bccb070d16","Type":"ContainerDied","Data":"ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.084683 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9rknl"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.084697 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9rknl" event={"ID":"73547923-4959-473f-b335-f1bccb070d16","Type":"ContainerDied","Data":"1d666dce23c36fce9e7365ab71b17a51d46fa6e15954d80a3ca89d1b6a77289b"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.091275 4687 generic.go:334] "Generic (PLEG): container finished" podID="ad1c1379-bfc3-4496-989d-e24243316f45" containerID="9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512" exitCode=0
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.091425 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clffd" event={"ID":"ad1c1379-bfc3-4496-989d-e24243316f45","Type":"ContainerDied","Data":"9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.091459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clffd" event={"ID":"ad1c1379-bfc3-4496-989d-e24243316f45","Type":"ContainerDied","Data":"14a4709d69f0046955cdf1c269da0ad08226a4259d52c540cdb33ec7cdf0a3c1"}
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.091518 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clffd"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.097164 4687 scope.go:117] "RemoveContainer" containerID="84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.106340 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4j9rv"]
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.110591 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4j9rv"]
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.129998 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-774pl"]
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.133883 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-774pl"]
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.135757 4687 scope.go:117] "RemoveContainer" containerID="bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.141935 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-clffd"]
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.146099 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-clffd"]
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.156546 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9rknl"]
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.169762 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9rknl"]
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.172192 4687 scope.go:117] "RemoveContainer" containerID="947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b"
Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.172765 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b\": container with ID starting with 947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b not found: ID does not exist" containerID="947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.172826 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b"} err="failed to get container status \"947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b\": rpc error: code = NotFound desc = could not find container \"947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b\": container with ID starting with 947a5a23cf407022aae12cf58178135e16c8f39868070063c14f9f0e7733a38b not found: ID does not exist"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.172869 4687 scope.go:117] "RemoveContainer" containerID="84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9"
Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.173303 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9\": container with ID starting with 84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9 not found: ID does not exist" containerID="84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.173364 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9"} err="failed to get container status \"84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9\": rpc error: code = NotFound desc = could not find container \"84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9\": container with ID starting with 84c411bc5115f9eb7292055a789d589452c91c2ae034ac157697e5fc3492cce9 not found: ID does not exist"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.173396 4687 scope.go:117] "RemoveContainer" containerID="bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.173766 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d59r5"]
Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.173874 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1\": container with ID starting with bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1 not found: ID does not exist" containerID="bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.173906 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1"} err="failed to get container status \"bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1\": rpc error: code = NotFound desc = could not find container \"bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1\": container with ID starting with bf7e7dafefda6634bf603b9b81513eef37d9d992ea86792a3f787a1b770e14e1 not found: ID does not exist"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.173925 4687 scope.go:117] "RemoveContainer" containerID="2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.177501 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d59r5"]
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.191509 4687 scope.go:117] "RemoveContainer" containerID="17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.199373 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6bp27"]
Dec 03 17:46:31 crc kubenswrapper[4687]: W1203 17:46:31.201766 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7aa828b_8739_41ee_bdd4_81f7b5421561.slice/crio-8087954155daa215b60cb08376e5fc492f07ed5284748f923bec501ec77a671f WatchSource:0}: Error finding container 8087954155daa215b60cb08376e5fc492f07ed5284748f923bec501ec77a671f: Status 404 returned error can't find the container with id 8087954155daa215b60cb08376e5fc492f07ed5284748f923bec501ec77a671f
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.208604 4687 scope.go:117] "RemoveContainer" containerID="2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89"
Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.209164 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89\": container with ID starting with 2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89 not found: ID does not exist" containerID="2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89"
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.209210 4687 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89"} err="failed to get container status \"2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89\": rpc error: code = NotFound desc = could not find container \"2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89\": container with ID starting with 2b1201d60088f5276e8e52bd97f471a34890bb96e30d22470a005e62095e4b89 not found: ID does not exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.209244 4687 scope.go:117] "RemoveContainer" containerID="17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c" Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.210210 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c\": container with ID starting with 17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c not found: ID does not exist" containerID="17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.210475 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c"} err="failed to get container status \"17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c\": rpc error: code = NotFound desc = could not find container \"17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c\": container with ID starting with 17c5cd4f6cdc7f324fb3a22e72757af3b5b998c18b283502db0390ca96f5b22c not found: ID does not exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.210577 4687 scope.go:117] "RemoveContainer" containerID="b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.236068 4687 scope.go:117] "RemoveContainer" 
containerID="89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.261688 4687 scope.go:117] "RemoveContainer" containerID="74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.301564 4687 scope.go:117] "RemoveContainer" containerID="b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b" Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.303364 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b\": container with ID starting with b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b not found: ID does not exist" containerID="b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.303409 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b"} err="failed to get container status \"b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b\": rpc error: code = NotFound desc = could not find container \"b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b\": container with ID starting with b5d30cf552577bcb49b3d500b0bdfaefb87c0512397f37fa7f6171c201cb5f6b not found: ID does not exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.303446 4687 scope.go:117] "RemoveContainer" containerID="89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0" Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.303888 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0\": container with ID starting with 
89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0 not found: ID does not exist" containerID="89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.303910 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0"} err="failed to get container status \"89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0\": rpc error: code = NotFound desc = could not find container \"89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0\": container with ID starting with 89a1a1cbcf6c2ba10f0522056809aca0c0a17c250959e644d2914096e957f4b0 not found: ID does not exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.303926 4687 scope.go:117] "RemoveContainer" containerID="74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e" Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.304540 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e\": container with ID starting with 74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e not found: ID does not exist" containerID="74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.304570 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e"} err="failed to get container status \"74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e\": rpc error: code = NotFound desc = could not find container \"74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e\": container with ID starting with 74ee5af19098ec3004369fdacbbe046873dd73e81b5eacc8e60474214d36da2e not found: ID does not 
exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.304589 4687 scope.go:117] "RemoveContainer" containerID="ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.364699 4687 scope.go:117] "RemoveContainer" containerID="bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.383523 4687 scope.go:117] "RemoveContainer" containerID="f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.406400 4687 scope.go:117] "RemoveContainer" containerID="ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f" Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.406905 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f\": container with ID starting with ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f not found: ID does not exist" containerID="ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.406936 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f"} err="failed to get container status \"ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f\": rpc error: code = NotFound desc = could not find container \"ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f\": container with ID starting with ba9bf4ecd3a115fd57ac4dc14a3b648add43939f6b848fb3ecd9ec7022add88f not found: ID does not exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.406960 4687 scope.go:117] "RemoveContainer" containerID="bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9" Dec 03 17:46:31 crc 
kubenswrapper[4687]: E1203 17:46:31.407475 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9\": container with ID starting with bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9 not found: ID does not exist" containerID="bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.407497 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9"} err="failed to get container status \"bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9\": rpc error: code = NotFound desc = could not find container \"bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9\": container with ID starting with bec5d09967547a14e3a236f5165411708b40c1b01d67847f9d201f9679765fa9 not found: ID does not exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.407510 4687 scope.go:117] "RemoveContainer" containerID="f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2" Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.407948 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2\": container with ID starting with f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2 not found: ID does not exist" containerID="f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.408027 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2"} err="failed to get container status 
\"f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2\": rpc error: code = NotFound desc = could not find container \"f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2\": container with ID starting with f99f6318433ca0542383d400552110613dbb062464a7d2186129251ef73712d2 not found: ID does not exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.408080 4687 scope.go:117] "RemoveContainer" containerID="9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.413238 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" path="/var/lib/kubelet/pods/0a8f332f-4fac-4824-90b9-a922f0bb35c2/volumes" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.413980 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16a03344-c427-400d-a611-a1be677c58b9" path="/var/lib/kubelet/pods/16a03344-c427-400d-a611-a1be677c58b9/volumes" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.414512 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" path="/var/lib/kubelet/pods/4148743d-b671-48a0-b1f0-ad5a3b73a93a/volumes" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.415801 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73547923-4959-473f-b335-f1bccb070d16" path="/var/lib/kubelet/pods/73547923-4959-473f-b335-f1bccb070d16/volumes" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.416512 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" path="/var/lib/kubelet/pods/ad1c1379-bfc3-4496-989d-e24243316f45/volumes" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.427538 4687 scope.go:117] "RemoveContainer" containerID="5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.448204 
4687 scope.go:117] "RemoveContainer" containerID="5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.473734 4687 scope.go:117] "RemoveContainer" containerID="9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512" Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.475074 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512\": container with ID starting with 9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512 not found: ID does not exist" containerID="9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.475166 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512"} err="failed to get container status \"9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512\": rpc error: code = NotFound desc = could not find container \"9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512\": container with ID starting with 9e1ff0b32763e92babfdb5915fe6017d73936c2619e26e98a7721e0058a7b512 not found: ID does not exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.475190 4687 scope.go:117] "RemoveContainer" containerID="5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742" Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.475574 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742\": container with ID starting with 5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742 not found: ID does not exist" containerID="5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742" 
Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.475597 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742"} err="failed to get container status \"5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742\": rpc error: code = NotFound desc = could not find container \"5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742\": container with ID starting with 5c7554c27382d78fe56b12deae7b5190a45ca84a4da9eb5471352cfa425f1742 not found: ID does not exist" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.475611 4687 scope.go:117] "RemoveContainer" containerID="5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07" Dec 03 17:46:31 crc kubenswrapper[4687]: E1203 17:46:31.476273 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07\": container with ID starting with 5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07 not found: ID does not exist" containerID="5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07" Dec 03 17:46:31 crc kubenswrapper[4687]: I1203 17:46:31.476318 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07"} err="failed to get container status \"5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07\": rpc error: code = NotFound desc = could not find container \"5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07\": container with ID starting with 5e220cd3d0e19a3104e113a6f6c234c72285b718814b2f9984d24528c34dfe07 not found: ID does not exist" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.011629 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pzzh6"] 
Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012204 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73547923-4959-473f-b335-f1bccb070d16" containerName="extract-utilities" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012223 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="73547923-4959-473f-b335-f1bccb070d16" containerName="extract-utilities" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012233 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerName="extract-utilities" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012240 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerName="extract-utilities" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012252 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012260 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012272 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" containerName="extract-utilities" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012279 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" containerName="extract-utilities" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012287 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73547923-4959-473f-b335-f1bccb070d16" containerName="extract-content" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012297 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="73547923-4959-473f-b335-f1bccb070d16" containerName="extract-content" 
Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012310 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73547923-4959-473f-b335-f1bccb070d16" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012317 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="73547923-4959-473f-b335-f1bccb070d16" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012326 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" containerName="extract-content" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012332 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" containerName="extract-content" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012343 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerName="extract-content" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012350 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerName="extract-content" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012360 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012367 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012378 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012385 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerName="registry-server" Dec 03 
17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012394 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerName="extract-utilities" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012402 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerName="extract-utilities" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012415 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerName="extract-content" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012423 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerName="extract-content" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012431 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a03344-c427-400d-a611-a1be677c58b9" containerName="marketplace-operator" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012437 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a03344-c427-400d-a611-a1be677c58b9" containerName="marketplace-operator" Dec 03 17:46:32 crc kubenswrapper[4687]: E1203 17:46:32.012448 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16a03344-c427-400d-a611-a1be677c58b9" containerName="marketplace-operator" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012455 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="16a03344-c427-400d-a611-a1be677c58b9" containerName="marketplace-operator" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012554 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a03344-c427-400d-a611-a1be677c58b9" containerName="marketplace-operator" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012568 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad1c1379-bfc3-4496-989d-e24243316f45" 
containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012579 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4148743d-b671-48a0-b1f0-ad5a3b73a93a" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012587 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8f332f-4fac-4824-90b9-a922f0bb35c2" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012597 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="73547923-4959-473f-b335-f1bccb070d16" containerName="registry-server" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.012786 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="16a03344-c427-400d-a611-a1be677c58b9" containerName="marketplace-operator" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.013392 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.020347 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzzh6"] Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.021763 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.092717 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1756ac21-d3d5-4255-ad09-3c783d85b99f-utilities\") pod \"redhat-marketplace-pzzh6\" (UID: \"1756ac21-d3d5-4255-ad09-3c783d85b99f\") " pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.092787 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1756ac21-d3d5-4255-ad09-3c783d85b99f-catalog-content\") pod \"redhat-marketplace-pzzh6\" (UID: \"1756ac21-d3d5-4255-ad09-3c783d85b99f\") " pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.092857 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5jv\" (UniqueName: \"kubernetes.io/projected/1756ac21-d3d5-4255-ad09-3c783d85b99f-kube-api-access-dk5jv\") pod \"redhat-marketplace-pzzh6\" (UID: \"1756ac21-d3d5-4255-ad09-3c783d85b99f\") " pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.108752 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6bp27" event={"ID":"d7aa828b-8739-41ee-bdd4-81f7b5421561","Type":"ContainerStarted","Data":"c4c6dcb85300f8cdf082c447fff4a019c1d5c73b0de8ecaf3d6bbbe156522a4b"} Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.108842 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6bp27" event={"ID":"d7aa828b-8739-41ee-bdd4-81f7b5421561","Type":"ContainerStarted","Data":"8087954155daa215b60cb08376e5fc492f07ed5284748f923bec501ec77a671f"} Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.109630 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6bp27" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.111681 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6bp27" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.129464 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6bp27" podStartSLOduration=2.129442162 podStartE2EDuration="2.129442162s" 
podCreationTimestamp="2025-12-03 17:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:46:32.124707168 +0000 UTC m=+425.015402611" watchObservedRunningTime="2025-12-03 17:46:32.129442162 +0000 UTC m=+425.020137595" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.194710 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1756ac21-d3d5-4255-ad09-3c783d85b99f-utilities\") pod \"redhat-marketplace-pzzh6\" (UID: \"1756ac21-d3d5-4255-ad09-3c783d85b99f\") " pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.194758 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1756ac21-d3d5-4255-ad09-3c783d85b99f-catalog-content\") pod \"redhat-marketplace-pzzh6\" (UID: \"1756ac21-d3d5-4255-ad09-3c783d85b99f\") " pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.194807 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5jv\" (UniqueName: \"kubernetes.io/projected/1756ac21-d3d5-4255-ad09-3c783d85b99f-kube-api-access-dk5jv\") pod \"redhat-marketplace-pzzh6\" (UID: \"1756ac21-d3d5-4255-ad09-3c783d85b99f\") " pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.196231 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1756ac21-d3d5-4255-ad09-3c783d85b99f-utilities\") pod \"redhat-marketplace-pzzh6\" (UID: \"1756ac21-d3d5-4255-ad09-3c783d85b99f\") " pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.196431 4687 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1756ac21-d3d5-4255-ad09-3c783d85b99f-catalog-content\") pod \"redhat-marketplace-pzzh6\" (UID: \"1756ac21-d3d5-4255-ad09-3c783d85b99f\") " pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.212745 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5jv\" (UniqueName: \"kubernetes.io/projected/1756ac21-d3d5-4255-ad09-3c783d85b99f-kube-api-access-dk5jv\") pod \"redhat-marketplace-pzzh6\" (UID: \"1756ac21-d3d5-4255-ad09-3c783d85b99f\") " pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.340441 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:32 crc kubenswrapper[4687]: I1203 17:46:32.569501 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pzzh6"] Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.118372 4687 generic.go:334] "Generic (PLEG): container finished" podID="1756ac21-d3d5-4255-ad09-3c783d85b99f" containerID="e2326ba1983896430a5b0c3ffa3fc25edd1498fb931b0e17ab68452b0a0703a4" exitCode=0 Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.119625 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzzh6" event={"ID":"1756ac21-d3d5-4255-ad09-3c783d85b99f","Type":"ContainerDied","Data":"e2326ba1983896430a5b0c3ffa3fc25edd1498fb931b0e17ab68452b0a0703a4"} Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.119654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzzh6" event={"ID":"1756ac21-d3d5-4255-ad09-3c783d85b99f","Type":"ContainerStarted","Data":"f73a785fd8afe4689fe6f969a1ad5816e361834e5a60a4710a61f02da4bec77d"} Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.311414 
4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mv6k8"] Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.312347 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.314851 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.333550 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mv6k8"] Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.414095 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxdjs\" (UniqueName: \"kubernetes.io/projected/6718dbad-e886-4c4c-b078-7b0ef1d4ee57-kube-api-access-pxdjs\") pod \"certified-operators-mv6k8\" (UID: \"6718dbad-e886-4c4c-b078-7b0ef1d4ee57\") " pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.414237 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6718dbad-e886-4c4c-b078-7b0ef1d4ee57-utilities\") pod \"certified-operators-mv6k8\" (UID: \"6718dbad-e886-4c4c-b078-7b0ef1d4ee57\") " pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.414351 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6718dbad-e886-4c4c-b078-7b0ef1d4ee57-catalog-content\") pod \"certified-operators-mv6k8\" (UID: \"6718dbad-e886-4c4c-b078-7b0ef1d4ee57\") " pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.516328 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxdjs\" (UniqueName: \"kubernetes.io/projected/6718dbad-e886-4c4c-b078-7b0ef1d4ee57-kube-api-access-pxdjs\") pod \"certified-operators-mv6k8\" (UID: \"6718dbad-e886-4c4c-b078-7b0ef1d4ee57\") " pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.516425 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6718dbad-e886-4c4c-b078-7b0ef1d4ee57-utilities\") pod \"certified-operators-mv6k8\" (UID: \"6718dbad-e886-4c4c-b078-7b0ef1d4ee57\") " pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.516564 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6718dbad-e886-4c4c-b078-7b0ef1d4ee57-catalog-content\") pod \"certified-operators-mv6k8\" (UID: \"6718dbad-e886-4c4c-b078-7b0ef1d4ee57\") " pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.517282 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6718dbad-e886-4c4c-b078-7b0ef1d4ee57-utilities\") pod \"certified-operators-mv6k8\" (UID: \"6718dbad-e886-4c4c-b078-7b0ef1d4ee57\") " pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.517507 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6718dbad-e886-4c4c-b078-7b0ef1d4ee57-catalog-content\") pod \"certified-operators-mv6k8\" (UID: \"6718dbad-e886-4c4c-b078-7b0ef1d4ee57\") " pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.539356 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pxdjs\" (UniqueName: \"kubernetes.io/projected/6718dbad-e886-4c4c-b078-7b0ef1d4ee57-kube-api-access-pxdjs\") pod \"certified-operators-mv6k8\" (UID: \"6718dbad-e886-4c4c-b078-7b0ef1d4ee57\") " pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:33 crc kubenswrapper[4687]: I1203 17:46:33.630943 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.071073 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mv6k8"] Dec 03 17:46:34 crc kubenswrapper[4687]: W1203 17:46:34.084514 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6718dbad_e886_4c4c_b078_7b0ef1d4ee57.slice/crio-d7ea0c9e9e2dd60ddd178accf1ea6337eda2a169bd494e1e9ba3d389e62597de WatchSource:0}: Error finding container d7ea0c9e9e2dd60ddd178accf1ea6337eda2a169bd494e1e9ba3d389e62597de: Status 404 returned error can't find the container with id d7ea0c9e9e2dd60ddd178accf1ea6337eda2a169bd494e1e9ba3d389e62597de Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.127674 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mv6k8" event={"ID":"6718dbad-e886-4c4c-b078-7b0ef1d4ee57","Type":"ContainerStarted","Data":"d7ea0c9e9e2dd60ddd178accf1ea6337eda2a169bd494e1e9ba3d389e62597de"} Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.222910 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-58svh"] Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.224021 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.226170 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.230475 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58svh"] Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.327837 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf77q\" (UniqueName: \"kubernetes.io/projected/e4fc20e2-ab37-41f6-973f-992fcb3de184-kube-api-access-mf77q\") pod \"redhat-operators-58svh\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.327963 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-catalog-content\") pod \"redhat-operators-58svh\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.328013 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-utilities\") pod \"redhat-operators-58svh\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.428959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf77q\" (UniqueName: \"kubernetes.io/projected/e4fc20e2-ab37-41f6-973f-992fcb3de184-kube-api-access-mf77q\") pod \"redhat-operators-58svh\" (UID: 
\"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.429057 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-catalog-content\") pod \"redhat-operators-58svh\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.429079 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-utilities\") pod \"redhat-operators-58svh\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.429480 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-utilities\") pod \"redhat-operators-58svh\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.429561 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-catalog-content\") pod \"redhat-operators-58svh\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.449611 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf77q\" (UniqueName: \"kubernetes.io/projected/e4fc20e2-ab37-41f6-973f-992fcb3de184-kube-api-access-mf77q\") pod \"redhat-operators-58svh\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " 
pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.543929 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:34 crc kubenswrapper[4687]: I1203 17:46:34.936262 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-58svh"] Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.135841 4687 generic.go:334] "Generic (PLEG): container finished" podID="1756ac21-d3d5-4255-ad09-3c783d85b99f" containerID="374d6621407c918c1ed17ea99ed21aaa1770b6ab7fd1a302a582744f75ce431d" exitCode=0 Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.135955 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzzh6" event={"ID":"1756ac21-d3d5-4255-ad09-3c783d85b99f","Type":"ContainerDied","Data":"374d6621407c918c1ed17ea99ed21aaa1770b6ab7fd1a302a582744f75ce431d"} Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.138403 4687 generic.go:334] "Generic (PLEG): container finished" podID="6718dbad-e886-4c4c-b078-7b0ef1d4ee57" containerID="dbe29c8e417460cae57b02d3b87b200089aa87a6fcbcb22a418a7da0140fda1f" exitCode=0 Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.138472 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mv6k8" event={"ID":"6718dbad-e886-4c4c-b078-7b0ef1d4ee57","Type":"ContainerDied","Data":"dbe29c8e417460cae57b02d3b87b200089aa87a6fcbcb22a418a7da0140fda1f"} Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.139795 4687 generic.go:334] "Generic (PLEG): container finished" podID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerID="c93226976f0f44cd73f99a5cb082ef00716720ede3a8c748e462afca2ffe7382" exitCode=0 Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.139824 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58svh" 
event={"ID":"e4fc20e2-ab37-41f6-973f-992fcb3de184","Type":"ContainerDied","Data":"c93226976f0f44cd73f99a5cb082ef00716720ede3a8c748e462afca2ffe7382"} Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.139840 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58svh" event={"ID":"e4fc20e2-ab37-41f6-973f-992fcb3de184","Type":"ContainerStarted","Data":"012b86c2e8420e0b6b3f48e26209aca9fd46944d2390c8dbbdcfbc3e618ddf92"} Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.615504 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pp8q2"] Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.619561 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.623039 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.624246 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pp8q2"] Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.642245 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4d5812-779c-4a37-bbaf-a9812dd96d93-catalog-content\") pod \"community-operators-pp8q2\" (UID: \"9b4d5812-779c-4a37-bbaf-a9812dd96d93\") " pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.642342 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkbnv\" (UniqueName: \"kubernetes.io/projected/9b4d5812-779c-4a37-bbaf-a9812dd96d93-kube-api-access-xkbnv\") pod \"community-operators-pp8q2\" (UID: \"9b4d5812-779c-4a37-bbaf-a9812dd96d93\") " 
pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.642388 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4d5812-779c-4a37-bbaf-a9812dd96d93-utilities\") pod \"community-operators-pp8q2\" (UID: \"9b4d5812-779c-4a37-bbaf-a9812dd96d93\") " pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.743473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4d5812-779c-4a37-bbaf-a9812dd96d93-catalog-content\") pod \"community-operators-pp8q2\" (UID: \"9b4d5812-779c-4a37-bbaf-a9812dd96d93\") " pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.743577 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkbnv\" (UniqueName: \"kubernetes.io/projected/9b4d5812-779c-4a37-bbaf-a9812dd96d93-kube-api-access-xkbnv\") pod \"community-operators-pp8q2\" (UID: \"9b4d5812-779c-4a37-bbaf-a9812dd96d93\") " pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.743618 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4d5812-779c-4a37-bbaf-a9812dd96d93-utilities\") pod \"community-operators-pp8q2\" (UID: \"9b4d5812-779c-4a37-bbaf-a9812dd96d93\") " pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.743978 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4d5812-779c-4a37-bbaf-a9812dd96d93-catalog-content\") pod \"community-operators-pp8q2\" (UID: \"9b4d5812-779c-4a37-bbaf-a9812dd96d93\") " 
pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.744023 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4d5812-779c-4a37-bbaf-a9812dd96d93-utilities\") pod \"community-operators-pp8q2\" (UID: \"9b4d5812-779c-4a37-bbaf-a9812dd96d93\") " pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.768031 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkbnv\" (UniqueName: \"kubernetes.io/projected/9b4d5812-779c-4a37-bbaf-a9812dd96d93-kube-api-access-xkbnv\") pod \"community-operators-pp8q2\" (UID: \"9b4d5812-779c-4a37-bbaf-a9812dd96d93\") " pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:35 crc kubenswrapper[4687]: I1203 17:46:35.964723 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:36 crc kubenswrapper[4687]: I1203 17:46:36.165893 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pzzh6" event={"ID":"1756ac21-d3d5-4255-ad09-3c783d85b99f","Type":"ContainerStarted","Data":"2924732e3aab64ad3725230212cf7e18ae6de217f608eef19c868d64c9d44061"} Dec 03 17:46:36 crc kubenswrapper[4687]: I1203 17:46:36.173336 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mv6k8" event={"ID":"6718dbad-e886-4c4c-b078-7b0ef1d4ee57","Type":"ContainerStarted","Data":"ff3c791021531dad21735a5687188f0c6f5a98467f18992779fd2d42f4496170"} Dec 03 17:46:36 crc kubenswrapper[4687]: I1203 17:46:36.176978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58svh" event={"ID":"e4fc20e2-ab37-41f6-973f-992fcb3de184","Type":"ContainerStarted","Data":"42d91f532a9fd1a6eedddc8f4271757923fa606ba5b8cb36e421043572557f4b"} Dec 03 
17:46:36 crc kubenswrapper[4687]: I1203 17:46:36.189221 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pp8q2"] Dec 03 17:46:36 crc kubenswrapper[4687]: I1203 17:46:36.193367 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pzzh6" podStartSLOduration=2.763374284 podStartE2EDuration="5.193348318s" podCreationTimestamp="2025-12-03 17:46:31 +0000 UTC" firstStartedPulling="2025-12-03 17:46:33.120418792 +0000 UTC m=+426.011114225" lastFinishedPulling="2025-12-03 17:46:35.550392826 +0000 UTC m=+428.441088259" observedRunningTime="2025-12-03 17:46:36.188964192 +0000 UTC m=+429.079659625" watchObservedRunningTime="2025-12-03 17:46:36.193348318 +0000 UTC m=+429.084043761" Dec 03 17:46:37 crc kubenswrapper[4687]: I1203 17:46:37.183681 4687 generic.go:334] "Generic (PLEG): container finished" podID="9b4d5812-779c-4a37-bbaf-a9812dd96d93" containerID="62708cb467df3c4bd55f907fa9cb3e57919cfd6428806e7f0f266c7ecd245b01" exitCode=0 Dec 03 17:46:37 crc kubenswrapper[4687]: I1203 17:46:37.183755 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp8q2" event={"ID":"9b4d5812-779c-4a37-bbaf-a9812dd96d93","Type":"ContainerDied","Data":"62708cb467df3c4bd55f907fa9cb3e57919cfd6428806e7f0f266c7ecd245b01"} Dec 03 17:46:37 crc kubenswrapper[4687]: I1203 17:46:37.184268 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp8q2" event={"ID":"9b4d5812-779c-4a37-bbaf-a9812dd96d93","Type":"ContainerStarted","Data":"d848c4f61b1d3d976f05533703c8a2ea4d9b39f74d3aa25fbf3a951a0da012de"} Dec 03 17:46:37 crc kubenswrapper[4687]: I1203 17:46:37.186707 4687 generic.go:334] "Generic (PLEG): container finished" podID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerID="42d91f532a9fd1a6eedddc8f4271757923fa606ba5b8cb36e421043572557f4b" exitCode=0 Dec 03 17:46:37 crc kubenswrapper[4687]: I1203 
17:46:37.186799 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58svh" event={"ID":"e4fc20e2-ab37-41f6-973f-992fcb3de184","Type":"ContainerDied","Data":"42d91f532a9fd1a6eedddc8f4271757923fa606ba5b8cb36e421043572557f4b"} Dec 03 17:46:37 crc kubenswrapper[4687]: I1203 17:46:37.193799 4687 generic.go:334] "Generic (PLEG): container finished" podID="6718dbad-e886-4c4c-b078-7b0ef1d4ee57" containerID="ff3c791021531dad21735a5687188f0c6f5a98467f18992779fd2d42f4496170" exitCode=0 Dec 03 17:46:37 crc kubenswrapper[4687]: I1203 17:46:37.193884 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mv6k8" event={"ID":"6718dbad-e886-4c4c-b078-7b0ef1d4ee57","Type":"ContainerDied","Data":"ff3c791021531dad21735a5687188f0c6f5a98467f18992779fd2d42f4496170"} Dec 03 17:46:37 crc kubenswrapper[4687]: I1203 17:46:37.193919 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mv6k8" event={"ID":"6718dbad-e886-4c4c-b078-7b0ef1d4ee57","Type":"ContainerStarted","Data":"670960cbb474626fd71ebfb5ceedf73d8027697e0abb7919a87c8d9558224e2d"} Dec 03 17:46:37 crc kubenswrapper[4687]: I1203 17:46:37.219051 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mv6k8" podStartSLOduration=2.6295185500000002 podStartE2EDuration="4.219035391s" podCreationTimestamp="2025-12-03 17:46:33 +0000 UTC" firstStartedPulling="2025-12-03 17:46:35.139586514 +0000 UTC m=+428.030281947" lastFinishedPulling="2025-12-03 17:46:36.729103355 +0000 UTC m=+429.619798788" observedRunningTime="2025-12-03 17:46:37.215399929 +0000 UTC m=+430.106095392" watchObservedRunningTime="2025-12-03 17:46:37.219035391 +0000 UTC m=+430.109730824" Dec 03 17:46:40 crc kubenswrapper[4687]: I1203 17:46:40.216295 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58svh" 
event={"ID":"e4fc20e2-ab37-41f6-973f-992fcb3de184","Type":"ContainerStarted","Data":"b62de15eab5ca57fb337d0bf438dbc1d74701733d9c6e58e4772d17b23b55ebf"} Dec 03 17:46:40 crc kubenswrapper[4687]: I1203 17:46:40.219571 4687 generic.go:334] "Generic (PLEG): container finished" podID="9b4d5812-779c-4a37-bbaf-a9812dd96d93" containerID="a9a5d9fd37869cacfa4c8f54c560610dca839f48e9cfe6dbff9256b16119f9dc" exitCode=0 Dec 03 17:46:40 crc kubenswrapper[4687]: I1203 17:46:40.219612 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp8q2" event={"ID":"9b4d5812-779c-4a37-bbaf-a9812dd96d93","Type":"ContainerDied","Data":"a9a5d9fd37869cacfa4c8f54c560610dca839f48e9cfe6dbff9256b16119f9dc"} Dec 03 17:46:40 crc kubenswrapper[4687]: I1203 17:46:40.237778 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-58svh" podStartSLOduration=2.490231992 podStartE2EDuration="6.237759553s" podCreationTimestamp="2025-12-03 17:46:34 +0000 UTC" firstStartedPulling="2025-12-03 17:46:35.141628264 +0000 UTC m=+428.032323697" lastFinishedPulling="2025-12-03 17:46:38.889155825 +0000 UTC m=+431.779851258" observedRunningTime="2025-12-03 17:46:40.232587761 +0000 UTC m=+433.123283214" watchObservedRunningTime="2025-12-03 17:46:40.237759553 +0000 UTC m=+433.128454986" Dec 03 17:46:41 crc kubenswrapper[4687]: I1203 17:46:41.227608 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp8q2" event={"ID":"9b4d5812-779c-4a37-bbaf-a9812dd96d93","Type":"ContainerStarted","Data":"e8b0fffa370dfa9b1b006018582c18560ebb59e92b1aac2b7eae8f5beaaf1971"} Dec 03 17:46:42 crc kubenswrapper[4687]: I1203 17:46:42.342028 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:42 crc kubenswrapper[4687]: I1203 17:46:42.342284 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:42 crc kubenswrapper[4687]: I1203 17:46:42.399307 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:42 crc kubenswrapper[4687]: I1203 17:46:42.419711 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pp8q2" podStartSLOduration=3.969141525 podStartE2EDuration="7.419694372s" podCreationTimestamp="2025-12-03 17:46:35 +0000 UTC" firstStartedPulling="2025-12-03 17:46:37.185186375 +0000 UTC m=+430.075881808" lastFinishedPulling="2025-12-03 17:46:40.635739222 +0000 UTC m=+433.526434655" observedRunningTime="2025-12-03 17:46:41.258716232 +0000 UTC m=+434.149411675" watchObservedRunningTime="2025-12-03 17:46:42.419694372 +0000 UTC m=+435.310389805" Dec 03 17:46:43 crc kubenswrapper[4687]: I1203 17:46:43.295567 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pzzh6" Dec 03 17:46:43 crc kubenswrapper[4687]: I1203 17:46:43.631921 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:43 crc kubenswrapper[4687]: I1203 17:46:43.631985 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:43 crc kubenswrapper[4687]: I1203 17:46:43.675085 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:44 crc kubenswrapper[4687]: I1203 17:46:44.279283 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mv6k8" Dec 03 17:46:44 crc kubenswrapper[4687]: I1203 17:46:44.544482 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:44 crc kubenswrapper[4687]: I1203 17:46:44.544551 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:44 crc kubenswrapper[4687]: I1203 17:46:44.583738 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:45 crc kubenswrapper[4687]: I1203 17:46:45.290745 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-58svh" Dec 03 17:46:45 crc kubenswrapper[4687]: I1203 17:46:45.965908 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:45 crc kubenswrapper[4687]: I1203 17:46:45.965998 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:46 crc kubenswrapper[4687]: I1203 17:46:46.008792 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:46 crc kubenswrapper[4687]: I1203 17:46:46.289920 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pp8q2" Dec 03 17:46:54 crc kubenswrapper[4687]: I1203 17:46:54.044296 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" podUID="dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" containerName="registry" containerID="cri-o://799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca" gracePeriod=30 Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.085789 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.171453 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-tls\") pod \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.171510 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-installation-pull-secrets\") pod \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.171639 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.171664 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22rdj\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-kube-api-access-22rdj\") pod \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.171696 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-certificates\") pod \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.171724 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-trusted-ca\") pod \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.171761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-ca-trust-extracted\") pod \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.171789 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-bound-sa-token\") pod \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\" (UID: \"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6\") " Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.172705 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.173491 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.176875 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.177291 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-kube-api-access-22rdj" (OuterVolumeSpecName: "kube-api-access-22rdj") pod "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6"). InnerVolumeSpecName "kube-api-access-22rdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.177549 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.177844 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.186793 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.192318 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" (UID: "dbb0f835-c087-42ce-b8ef-a822e8d1a3b6"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.273407 4687 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.273443 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.273452 4687 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.273461 4687 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.273469 4687 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.273478 4687 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.273486 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22rdj\" (UniqueName: \"kubernetes.io/projected/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6-kube-api-access-22rdj\") on node \"crc\" DevicePath \"\"" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.317041 4687 generic.go:334] "Generic (PLEG): container finished" podID="dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" containerID="799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca" exitCode=0 Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.317092 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" event={"ID":"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6","Type":"ContainerDied","Data":"799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca"} Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.317138 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" event={"ID":"dbb0f835-c087-42ce-b8ef-a822e8d1a3b6","Type":"ContainerDied","Data":"3459d02a369846552269161396f67d71675344f9a5c847d6fc634b44d2ba4a80"} Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.317155 4687 scope.go:117] "RemoveContainer" 
containerID="799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.317077 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-gg6bm" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.333291 4687 scope.go:117] "RemoveContainer" containerID="799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca" Dec 03 17:46:57 crc kubenswrapper[4687]: E1203 17:46:57.335239 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca\": container with ID starting with 799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca not found: ID does not exist" containerID="799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.335295 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca"} err="failed to get container status \"799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca\": rpc error: code = NotFound desc = could not find container \"799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca\": container with ID starting with 799aad54691818ab3e1830ac5d4cdfe74feae5e987d6acd5fadb30dd3f9595ca not found: ID does not exist" Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.352378 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gg6bm"] Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.357068 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-gg6bm"] Dec 03 17:46:57 crc kubenswrapper[4687]: I1203 17:46:57.414837 4687 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" path="/var/lib/kubelet/pods/dbb0f835-c087-42ce-b8ef-a822e8d1a3b6/volumes" Dec 03 17:48:14 crc kubenswrapper[4687]: I1203 17:48:14.111951 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:48:14 crc kubenswrapper[4687]: I1203 17:48:14.112408 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:48:44 crc kubenswrapper[4687]: I1203 17:48:44.111415 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:48:44 crc kubenswrapper[4687]: I1203 17:48:44.111978 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:49:14 crc kubenswrapper[4687]: I1203 17:49:14.112676 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 
03 17:49:14 crc kubenswrapper[4687]: I1203 17:49:14.113389 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:49:14 crc kubenswrapper[4687]: I1203 17:49:14.113441 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:49:14 crc kubenswrapper[4687]: I1203 17:49:14.114089 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a44bca3b334d1f1acdc92525e8a8a678e3debaa223bb0727f5438679d7038c28"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 17:49:14 crc kubenswrapper[4687]: I1203 17:49:14.114171 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://a44bca3b334d1f1acdc92525e8a8a678e3debaa223bb0727f5438679d7038c28" gracePeriod=600 Dec 03 17:49:14 crc kubenswrapper[4687]: I1203 17:49:14.445257 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="a44bca3b334d1f1acdc92525e8a8a678e3debaa223bb0727f5438679d7038c28" exitCode=0 Dec 03 17:49:14 crc kubenswrapper[4687]: I1203 17:49:14.445341 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" 
event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"a44bca3b334d1f1acdc92525e8a8a678e3debaa223bb0727f5438679d7038c28"} Dec 03 17:49:14 crc kubenswrapper[4687]: I1203 17:49:14.445560 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"343b0a9edf6bdcba6ed9889eac0435890e04eb43294c88a95b2f241b2ffd4273"} Dec 03 17:49:14 crc kubenswrapper[4687]: I1203 17:49:14.445582 4687 scope.go:117] "RemoveContainer" containerID="830dea32fbec17f41ad28fddfaf773cf970c307273af21e7663ef8a4b33a9fd6" Dec 03 17:51:14 crc kubenswrapper[4687]: I1203 17:51:14.111942 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:51:14 crc kubenswrapper[4687]: I1203 17:51:14.112632 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:51:44 crc kubenswrapper[4687]: I1203 17:51:44.111902 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:51:44 crc kubenswrapper[4687]: I1203 17:51:44.113224 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.976642 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-f2zqs"] Dec 03 17:52:06 crc kubenswrapper[4687]: E1203 17:52:06.977460 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" containerName="registry" Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.977478 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" containerName="registry" Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.977619 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb0f835-c087-42ce-b8ef-a822e8d1a3b6" containerName="registry" Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.978134 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-f2zqs" Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.981044 4687 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jh2fr" Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.981330 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.984520 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-pxzt9"] Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.985450 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-pxzt9" Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.986833 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.994948 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-f2zqs"] Dec 03 17:52:06 crc kubenswrapper[4687]: I1203 17:52:06.996086 4687 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wjc76" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.005688 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-pxzt9"] Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.009860 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-894dz"] Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.010560 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.012297 4687 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2bfj5" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.022375 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-894dz"] Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.120191 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwmj\" (UniqueName: \"kubernetes.io/projected/ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43-kube-api-access-2jwmj\") pod \"cert-manager-webhook-5655c58dd6-894dz\" (UID: \"ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.120468 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6ll\" (UniqueName: \"kubernetes.io/projected/91a9f246-dffa-4891-a4b8-91962e0bdbad-kube-api-access-9d6ll\") pod \"cert-manager-cainjector-7f985d654d-pxzt9\" (UID: \"91a9f246-dffa-4891-a4b8-91962e0bdbad\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-pxzt9" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.120589 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tz92\" (UniqueName: \"kubernetes.io/projected/46718bd5-eda0-473f-ba31-97f2a591fefe-kube-api-access-7tz92\") pod \"cert-manager-5b446d88c5-f2zqs\" (UID: \"46718bd5-eda0-473f-ba31-97f2a591fefe\") " pod="cert-manager/cert-manager-5b446d88c5-f2zqs" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.222277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jwmj\" (UniqueName: 
\"kubernetes.io/projected/ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43-kube-api-access-2jwmj\") pod \"cert-manager-webhook-5655c58dd6-894dz\" (UID: \"ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.222669 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6ll\" (UniqueName: \"kubernetes.io/projected/91a9f246-dffa-4891-a4b8-91962e0bdbad-kube-api-access-9d6ll\") pod \"cert-manager-cainjector-7f985d654d-pxzt9\" (UID: \"91a9f246-dffa-4891-a4b8-91962e0bdbad\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-pxzt9" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.222806 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tz92\" (UniqueName: \"kubernetes.io/projected/46718bd5-eda0-473f-ba31-97f2a591fefe-kube-api-access-7tz92\") pod \"cert-manager-5b446d88c5-f2zqs\" (UID: \"46718bd5-eda0-473f-ba31-97f2a591fefe\") " pod="cert-manager/cert-manager-5b446d88c5-f2zqs" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.242366 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6ll\" (UniqueName: \"kubernetes.io/projected/91a9f246-dffa-4891-a4b8-91962e0bdbad-kube-api-access-9d6ll\") pod \"cert-manager-cainjector-7f985d654d-pxzt9\" (UID: \"91a9f246-dffa-4891-a4b8-91962e0bdbad\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-pxzt9" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.242700 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tz92\" (UniqueName: \"kubernetes.io/projected/46718bd5-eda0-473f-ba31-97f2a591fefe-kube-api-access-7tz92\") pod \"cert-manager-5b446d88c5-f2zqs\" (UID: \"46718bd5-eda0-473f-ba31-97f2a591fefe\") " pod="cert-manager/cert-manager-5b446d88c5-f2zqs" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.243478 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2jwmj\" (UniqueName: \"kubernetes.io/projected/ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43-kube-api-access-2jwmj\") pod \"cert-manager-webhook-5655c58dd6-894dz\" (UID: \"ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.305593 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-f2zqs" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.305755 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-pxzt9" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.321757 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.523702 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-pxzt9"] Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.536894 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 17:52:07 crc kubenswrapper[4687]: I1203 17:52:07.765281 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-f2zqs"] Dec 03 17:52:07 crc kubenswrapper[4687]: W1203 17:52:07.765717 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46718bd5_eda0_473f_ba31_97f2a591fefe.slice/crio-7a4bb20e39d7947d579cbecccd017ff2d3a1e5c048c1b33bd9320bd843164daf WatchSource:0}: Error finding container 7a4bb20e39d7947d579cbecccd017ff2d3a1e5c048c1b33bd9320bd843164daf: Status 404 returned error can't find the container with id 7a4bb20e39d7947d579cbecccd017ff2d3a1e5c048c1b33bd9320bd843164daf Dec 03 17:52:07 crc kubenswrapper[4687]: 
I1203 17:52:07.782912 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-894dz"] Dec 03 17:52:07 crc kubenswrapper[4687]: W1203 17:52:07.788562 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca5b85a2_69d2_428e_9c2a_9e1fdcff7b43.slice/crio-aee7943bd75bfddf9863d82e72135f026739cabecdbb551510a054394b81a997 WatchSource:0}: Error finding container aee7943bd75bfddf9863d82e72135f026739cabecdbb551510a054394b81a997: Status 404 returned error can't find the container with id aee7943bd75bfddf9863d82e72135f026739cabecdbb551510a054394b81a997 Dec 03 17:52:08 crc kubenswrapper[4687]: I1203 17:52:08.434080 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-pxzt9" event={"ID":"91a9f246-dffa-4891-a4b8-91962e0bdbad","Type":"ContainerStarted","Data":"013e7af8329d4f7798f1bc71383004467800c7395289718b5d272a71757478c1"} Dec 03 17:52:08 crc kubenswrapper[4687]: I1203 17:52:08.435884 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-f2zqs" event={"ID":"46718bd5-eda0-473f-ba31-97f2a591fefe","Type":"ContainerStarted","Data":"7a4bb20e39d7947d579cbecccd017ff2d3a1e5c048c1b33bd9320bd843164daf"} Dec 03 17:52:08 crc kubenswrapper[4687]: I1203 17:52:08.437423 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" event={"ID":"ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43","Type":"ContainerStarted","Data":"aee7943bd75bfddf9863d82e72135f026739cabecdbb551510a054394b81a997"} Dec 03 17:52:09 crc kubenswrapper[4687]: I1203 17:52:09.941216 4687 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 17:52:10 crc kubenswrapper[4687]: I1203 17:52:10.446781 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-7f985d654d-pxzt9" event={"ID":"91a9f246-dffa-4891-a4b8-91962e0bdbad","Type":"ContainerStarted","Data":"d3284b1fc3ab67213c2d7b4328025a48c6ea5de5b4d81c4f780321227b6881e6"} Dec 03 17:52:10 crc kubenswrapper[4687]: I1203 17:52:10.463756 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-pxzt9" podStartSLOduration=2.661250491 podStartE2EDuration="4.463719728s" podCreationTimestamp="2025-12-03 17:52:06 +0000 UTC" firstStartedPulling="2025-12-03 17:52:07.536630091 +0000 UTC m=+760.427325524" lastFinishedPulling="2025-12-03 17:52:09.339099288 +0000 UTC m=+762.229794761" observedRunningTime="2025-12-03 17:52:10.460677076 +0000 UTC m=+763.351372509" watchObservedRunningTime="2025-12-03 17:52:10.463719728 +0000 UTC m=+763.354415161" Dec 03 17:52:11 crc kubenswrapper[4687]: I1203 17:52:11.453151 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-f2zqs" event={"ID":"46718bd5-eda0-473f-ba31-97f2a591fefe","Type":"ContainerStarted","Data":"0c8f4dbf5e6216a4672200c7b8722f5a71e10e690459a69e005b9bfb0d92e1b2"} Dec 03 17:52:11 crc kubenswrapper[4687]: I1203 17:52:11.454529 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" event={"ID":"ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43","Type":"ContainerStarted","Data":"eb03b2c181be8c2a161681a96178ef1acd08787f307d34630d68bb849f845049"} Dec 03 17:52:11 crc kubenswrapper[4687]: I1203 17:52:11.494290 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" podStartSLOduration=2.204820697 podStartE2EDuration="5.494261588s" podCreationTimestamp="2025-12-03 17:52:06 +0000 UTC" firstStartedPulling="2025-12-03 17:52:07.791223834 +0000 UTC m=+760.681919267" lastFinishedPulling="2025-12-03 17:52:11.080664725 +0000 UTC m=+763.971360158" observedRunningTime="2025-12-03 
17:52:11.49286312 +0000 UTC m=+764.383558553" watchObservedRunningTime="2025-12-03 17:52:11.494261588 +0000 UTC m=+764.384957071" Dec 03 17:52:11 crc kubenswrapper[4687]: I1203 17:52:11.497994 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-f2zqs" podStartSLOduration=2.183863328 podStartE2EDuration="5.497974769s" podCreationTimestamp="2025-12-03 17:52:06 +0000 UTC" firstStartedPulling="2025-12-03 17:52:07.767646104 +0000 UTC m=+760.658341537" lastFinishedPulling="2025-12-03 17:52:11.081757535 +0000 UTC m=+763.972452978" observedRunningTime="2025-12-03 17:52:11.47148379 +0000 UTC m=+764.362179223" watchObservedRunningTime="2025-12-03 17:52:11.497974769 +0000 UTC m=+764.388670222" Dec 03 17:52:12 crc kubenswrapper[4687]: I1203 17:52:12.321901 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" Dec 03 17:52:14 crc kubenswrapper[4687]: I1203 17:52:14.111875 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:52:14 crc kubenswrapper[4687]: I1203 17:52:14.112341 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:52:14 crc kubenswrapper[4687]: I1203 17:52:14.112411 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:52:14 crc kubenswrapper[4687]: I1203 17:52:14.113077 4687 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"343b0a9edf6bdcba6ed9889eac0435890e04eb43294c88a95b2f241b2ffd4273"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 17:52:14 crc kubenswrapper[4687]: I1203 17:52:14.113172 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://343b0a9edf6bdcba6ed9889eac0435890e04eb43294c88a95b2f241b2ffd4273" gracePeriod=600 Dec 03 17:52:14 crc kubenswrapper[4687]: I1203 17:52:14.473668 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="343b0a9edf6bdcba6ed9889eac0435890e04eb43294c88a95b2f241b2ffd4273" exitCode=0 Dec 03 17:52:14 crc kubenswrapper[4687]: I1203 17:52:14.473711 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"343b0a9edf6bdcba6ed9889eac0435890e04eb43294c88a95b2f241b2ffd4273"} Dec 03 17:52:14 crc kubenswrapper[4687]: I1203 17:52:14.474093 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"15f3686b8b444d7ca51bf051ca58c72afb51a20e88ac7611ce3fcbdca0c8e6a0"} Dec 03 17:52:14 crc kubenswrapper[4687]: I1203 17:52:14.474150 4687 scope.go:117] "RemoveContainer" containerID="a44bca3b334d1f1acdc92525e8a8a678e3debaa223bb0727f5438679d7038c28" Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.324792 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="cert-manager/cert-manager-webhook-5655c58dd6-894dz" Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.480738 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-668q2"] Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.481314 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovn-controller" containerID="cri-o://5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b" gracePeriod=30 Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.481394 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="nbdb" containerID="cri-o://d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68" gracePeriod=30 Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.481478 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovn-acl-logging" containerID="cri-o://5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c" gracePeriod=30 Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.481509 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="northd" containerID="cri-o://ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc" gracePeriod=30 Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.481507 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="kube-rbac-proxy-node" 
containerID="cri-o://5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e" gracePeriod=30 Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.481657 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789" gracePeriod=30 Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.481475 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="sbdb" containerID="cri-o://18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa" gracePeriod=30 Dec 03 17:52:17 crc kubenswrapper[4687]: I1203 17:52:17.519886 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" containerID="cri-o://ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f" gracePeriod=30 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.236669 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/3.log" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.240434 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovn-acl-logging/0.log" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.241324 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovn-controller/0.log" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.241913 4687 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.318703 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7fdzk"] Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.318946 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="northd" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.318959 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="northd" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.318972 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="nbdb" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.318980 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="nbdb" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.318992 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="kubecfg-setup" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319000 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="kubecfg-setup" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319015 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovn-acl-logging" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319023 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovn-acl-logging" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319031 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" 
containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319039 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319049 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="sbdb" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319057 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="sbdb" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319066 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319074 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319085 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovn-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319092 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovn-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319106 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="kube-rbac-proxy-node" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319149 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="kube-rbac-proxy-node" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319158 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" 
containerName="kube-rbac-proxy-ovn-metrics" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319166 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319177 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319185 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319297 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="sbdb" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319311 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovn-acl-logging" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319324 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="kube-rbac-proxy-node" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319335 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319344 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319353 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="nbdb" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319363 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319375 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="northd" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319384 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319393 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovn-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319522 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319532 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.319544 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319553 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319663 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.319676 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerName="ovnkube-controller" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.326263 4687 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395269 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-kubelet\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395344 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-netd\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395389 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-script-lib\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395423 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-bin\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395424 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395473 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395581 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395562 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-node-log\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395671 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-node-log" (OuterVolumeSpecName: "node-log") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395743 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovn-node-metrics-cert\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395911 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.395876 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-var-lib-openvswitch\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396011 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396063 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-systemd-units\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396095 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-ovn\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396274 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396315 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396709 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-ovn-kubernetes\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396740 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-log-socket\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396765 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kjw5\" (UniqueName: \"kubernetes.io/projected/f7fe22da-1ea3-49ba-b2c6-851ff064db76-kube-api-access-6kjw5\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396793 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-systemd\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396810 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-slash\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396826 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396828 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-log-socket" (OuterVolumeSpecName: "log-socket") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396829 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396844 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-openvswitch\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396863 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396878 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-slash" (OuterVolumeSpecName: "host-slash") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396892 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396911 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-netns\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396942 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-config\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396945 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod 
"f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396957 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-env-overrides\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.396971 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-etc-openvswitch\") pod \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\" (UID: \"f7fe22da-1ea3-49ba-b2c6-851ff064db76\") " Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397304 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-kubelet\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397326 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-node-log\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397367 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-run-ovn\") pod \"ovnkube-node-7fdzk\" (UID: 
\"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397381 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-run-openvswitch\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397442 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-var-lib-openvswitch\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397460 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-run-systemd\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397491 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fde134bd-9ad7-4039-a9d5-33abf54eba24-ovnkube-script-lib\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397511 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-etc-openvswitch\") 
pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397529 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-run-netns\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397544 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-slash\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397565 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmt54\" (UniqueName: \"kubernetes.io/projected/fde134bd-9ad7-4039-a9d5-33abf54eba24-kube-api-access-gmt54\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397583 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397591 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fde134bd-9ad7-4039-a9d5-33abf54eba24-env-overrides\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397658 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-cni-netd\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397682 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-run-ovn-kubernetes\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397698 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fde134bd-9ad7-4039-a9d5-33abf54eba24-ovnkube-config\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397732 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-cni-bin\") pod \"ovnkube-node-7fdzk\" (UID: 
\"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397874 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fde134bd-9ad7-4039-a9d5-33abf54eba24-ovn-node-metrics-cert\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397901 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-log-socket\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397908 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.397937 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398005 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-systemd-units\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398141 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398152 4687 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398161 4687 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398171 4687 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc 
kubenswrapper[4687]: I1203 17:52:18.398180 4687 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398189 4687 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398197 4687 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398205 4687 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-node-log\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398213 4687 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398221 4687 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398229 4687 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398238 4687 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398246 4687 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-log-socket\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398254 4687 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-slash\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398263 4687 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398270 4687 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.398279 4687 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.401027 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.401236 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7fe22da-1ea3-49ba-b2c6-851ff064db76-kube-api-access-6kjw5" (OuterVolumeSpecName: "kube-api-access-6kjw5") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "kube-api-access-6kjw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.409138 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f7fe22da-1ea3-49ba-b2c6-851ff064db76" (UID: "f7fe22da-1ea3-49ba-b2c6-851ff064db76"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499234 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmt54\" (UniqueName: \"kubernetes.io/projected/fde134bd-9ad7-4039-a9d5-33abf54eba24-kube-api-access-gmt54\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499296 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fde134bd-9ad7-4039-a9d5-33abf54eba24-env-overrides\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499348 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-cni-netd\") pod 
\"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499372 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-run-ovn-kubernetes\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499391 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fde134bd-9ad7-4039-a9d5-33abf54eba24-ovnkube-config\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499432 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-cni-bin\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499472 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fde134bd-9ad7-4039-a9d5-33abf54eba24-ovn-node-metrics-cert\") pod 
\"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499496 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-log-socket\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499527 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-systemd-units\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499563 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-kubelet\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499577 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-node-log\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499626 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-run-ovn\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-run-openvswitch\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499707 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-var-lib-openvswitch\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499723 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-run-systemd\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499825 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fde134bd-9ad7-4039-a9d5-33abf54eba24-ovnkube-script-lib\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499857 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-etc-openvswitch\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 
17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499880 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-run-netns\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499895 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-slash\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-kubelet\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499934 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7fe22da-1ea3-49ba-b2c6-851ff064db76-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499946 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kjw5\" (UniqueName: \"kubernetes.io/projected/f7fe22da-1ea3-49ba-b2c6-851ff064db76-kube-api-access-6kjw5\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.499955 4687 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f7fe22da-1ea3-49ba-b2c6-851ff064db76-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 
17:52:18.499982 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-slash\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500008 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-node-log\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500147 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-cni-bin\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500181 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500230 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-run-systemd\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500295 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fde134bd-9ad7-4039-a9d5-33abf54eba24-env-overrides\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500359 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-systemd-units\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500398 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-log-socket\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500428 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-run-openvswitch\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500462 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-run-ovn\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500568 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-etc-openvswitch\") pod \"ovnkube-node-7fdzk\" 
(UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500594 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-run-netns\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500749 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-cni-netd\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.500922 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-host-run-ovn-kubernetes\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.501057 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fde134bd-9ad7-4039-a9d5-33abf54eba24-var-lib-openvswitch\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.501278 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fde134bd-9ad7-4039-a9d5-33abf54eba24-ovnkube-script-lib\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.501561 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fde134bd-9ad7-4039-a9d5-33abf54eba24-ovnkube-config\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.503106 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fde134bd-9ad7-4039-a9d5-33abf54eba24-ovn-node-metrics-cert\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.509662 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbjvs_ede1a722-2df8-433e-b8be-82c434be7d02/kube-multus/2.log" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.510186 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbjvs_ede1a722-2df8-433e-b8be-82c434be7d02/kube-multus/1.log" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.510330 4687 generic.go:334] "Generic (PLEG): container finished" podID="ede1a722-2df8-433e-b8be-82c434be7d02" containerID="c8b065c74150d6815ce9b20a20e9ba6c3845bb6ae5f88984b267fd3ee16190d9" exitCode=2 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.510441 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbjvs" event={"ID":"ede1a722-2df8-433e-b8be-82c434be7d02","Type":"ContainerDied","Data":"c8b065c74150d6815ce9b20a20e9ba6c3845bb6ae5f88984b267fd3ee16190d9"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.510529 4687 scope.go:117] "RemoveContainer" containerID="d8965277ada46b7fa28ace85aad6d4b8ca009879e987966be8c94f944a706870" Dec 03 17:52:18 crc 
kubenswrapper[4687]: I1203 17:52:18.510951 4687 scope.go:117] "RemoveContainer" containerID="c8b065c74150d6815ce9b20a20e9ba6c3845bb6ae5f88984b267fd3ee16190d9" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.513853 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovnkube-controller/3.log" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.516508 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmt54\" (UniqueName: \"kubernetes.io/projected/fde134bd-9ad7-4039-a9d5-33abf54eba24-kube-api-access-gmt54\") pod \"ovnkube-node-7fdzk\" (UID: \"fde134bd-9ad7-4039-a9d5-33abf54eba24\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.516599 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovn-acl-logging/0.log" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.517775 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-668q2_f7fe22da-1ea3-49ba-b2c6-851ff064db76/ovn-controller/0.log" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518088 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f" exitCode=0 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518110 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa" exitCode=0 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518123 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" 
containerID="d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68" exitCode=0 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518166 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc" exitCode=0 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518175 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789" exitCode=0 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518183 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e" exitCode=0 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518191 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c" exitCode=143 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518198 4687 generic.go:334] "Generic (PLEG): container finished" podID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" containerID="5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b" exitCode=143 Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518215 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518242 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" 
event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518255 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518267 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518278 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518288 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518337 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518351 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518359 4687 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518365 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518374 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518380 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518401 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518405 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518509 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518527 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518533 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518551 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518612 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518621 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518647 4687 pod_container_deletor.go:114] "Failed to 
issue the request to remove container" containerID={"Type":"cri-o","ID":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518672 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518678 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518685 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518691 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518695 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518700 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518705 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518716 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518726 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518733 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518738 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518743 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518748 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518753 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518757 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 
17:52:18.518762 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518767 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518771 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518778 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-668q2" event={"ID":"f7fe22da-1ea3-49ba-b2c6-851ff064db76","Type":"ContainerDied","Data":"74f087e99c31a3e0a1bd0519a026b43b9cb105eed5b44f6261647bc63f7809be"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518785 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518792 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518798 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518802 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518808 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518812 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518817 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518822 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518827 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.518832 4687 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca"} Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.542162 4687 scope.go:117] "RemoveContainer" containerID="ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.562170 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-668q2"] Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 
17:52:18.565652 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-668q2"] Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.565858 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.580750 4687 scope.go:117] "RemoveContainer" containerID="18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.595626 4687 scope.go:117] "RemoveContainer" containerID="d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.638939 4687 scope.go:117] "RemoveContainer" containerID="ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.660421 4687 scope.go:117] "RemoveContainer" containerID="18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.662114 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.678812 4687 scope.go:117] "RemoveContainer" containerID="5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e" Dec 03 17:52:18 crc kubenswrapper[4687]: W1203 17:52:18.684354 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfde134bd_9ad7_4039_a9d5_33abf54eba24.slice/crio-d74bf882d78f2b314865b9976b841820223d39967d89194ecaf6ba09d3608fdb WatchSource:0}: Error finding container d74bf882d78f2b314865b9976b841820223d39967d89194ecaf6ba09d3608fdb: Status 404 returned error can't find the container with id d74bf882d78f2b314865b9976b841820223d39967d89194ecaf6ba09d3608fdb Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.708114 4687 scope.go:117] "RemoveContainer" containerID="5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.743106 4687 scope.go:117] "RemoveContainer" containerID="5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.757102 4687 scope.go:117] "RemoveContainer" containerID="c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.787145 4687 scope.go:117] "RemoveContainer" containerID="ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.787673 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f\": container with ID starting with ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f not found: ID does not exist" containerID="ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f" Dec 03 17:52:18 crc 
kubenswrapper[4687]: I1203 17:52:18.787720 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} err="failed to get container status \"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f\": rpc error: code = NotFound desc = could not find container \"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f\": container with ID starting with ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.787752 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.788105 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\": container with ID starting with 1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2 not found: ID does not exist" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.788192 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"} err="failed to get container status \"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\": rpc error: code = NotFound desc = could not find container \"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\": container with ID starting with 1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.788217 4687 scope.go:117] "RemoveContainer" containerID="18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa" Dec 03 
17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.788678 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\": container with ID starting with 18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa not found: ID does not exist" containerID="18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.788714 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} err="failed to get container status \"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\": rpc error: code = NotFound desc = could not find container \"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\": container with ID starting with 18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.788738 4687 scope.go:117] "RemoveContainer" containerID="d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.789044 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\": container with ID starting with d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68 not found: ID does not exist" containerID="d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.789076 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} err="failed to get container status 
\"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\": rpc error: code = NotFound desc = could not find container \"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\": container with ID starting with d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.789100 4687 scope.go:117] "RemoveContainer" containerID="ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.789608 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\": container with ID starting with ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc not found: ID does not exist" containerID="ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.789643 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} err="failed to get container status \"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\": rpc error: code = NotFound desc = could not find container \"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\": container with ID starting with ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.789666 4687 scope.go:117] "RemoveContainer" containerID="18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.790005 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\": container with ID starting with 18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789 not found: ID does not exist" containerID="18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.790041 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} err="failed to get container status \"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\": rpc error: code = NotFound desc = could not find container \"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\": container with ID starting with 18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.790064 4687 scope.go:117] "RemoveContainer" containerID="5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.793700 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\": container with ID starting with 5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e not found: ID does not exist" containerID="5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.793752 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} err="failed to get container status \"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\": rpc error: code = NotFound desc = could not find container \"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\": container with ID 
starting with 5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.793778 4687 scope.go:117] "RemoveContainer" containerID="5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.794435 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\": container with ID starting with 5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c not found: ID does not exist" containerID="5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.794474 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} err="failed to get container status \"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\": rpc error: code = NotFound desc = could not find container \"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\": container with ID starting with 5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.794499 4687 scope.go:117] "RemoveContainer" containerID="5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.794861 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\": container with ID starting with 5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b not found: ID does not exist" containerID="5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b" Dec 03 
17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.794895 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} err="failed to get container status \"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\": rpc error: code = NotFound desc = could not find container \"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\": container with ID starting with 5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.794920 4687 scope.go:117] "RemoveContainer" containerID="c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca" Dec 03 17:52:18 crc kubenswrapper[4687]: E1203 17:52:18.795276 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\": container with ID starting with c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca not found: ID does not exist" containerID="c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.795300 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca"} err="failed to get container status \"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\": rpc error: code = NotFound desc = could not find container \"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\": container with ID starting with c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.795319 4687 scope.go:117] "RemoveContainer" 
containerID="ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.795714 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} err="failed to get container status \"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f\": rpc error: code = NotFound desc = could not find container \"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f\": container with ID starting with ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.795756 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.796276 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"} err="failed to get container status \"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\": rpc error: code = NotFound desc = could not find container \"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\": container with ID starting with 1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.796311 4687 scope.go:117] "RemoveContainer" containerID="18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.796621 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} err="failed to get container status \"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\": rpc error: code = NotFound desc = could 
not find container \"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\": container with ID starting with 18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.796648 4687 scope.go:117] "RemoveContainer" containerID="d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.796911 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} err="failed to get container status \"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\": rpc error: code = NotFound desc = could not find container \"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\": container with ID starting with d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.796935 4687 scope.go:117] "RemoveContainer" containerID="ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.797258 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} err="failed to get container status \"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\": rpc error: code = NotFound desc = could not find container \"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\": container with ID starting with ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.797283 4687 scope.go:117] "RemoveContainer" containerID="18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 
17:52:18.797575 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} err="failed to get container status \"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\": rpc error: code = NotFound desc = could not find container \"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\": container with ID starting with 18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.797603 4687 scope.go:117] "RemoveContainer" containerID="5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.797885 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} err="failed to get container status \"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\": rpc error: code = NotFound desc = could not find container \"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\": container with ID starting with 5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.797908 4687 scope.go:117] "RemoveContainer" containerID="5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.798114 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} err="failed to get container status \"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\": rpc error: code = NotFound desc = could not find container \"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\": container with ID starting with 
5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.798156 4687 scope.go:117] "RemoveContainer" containerID="5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.798370 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} err="failed to get container status \"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\": rpc error: code = NotFound desc = could not find container \"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\": container with ID starting with 5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.798395 4687 scope.go:117] "RemoveContainer" containerID="c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.798618 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca"} err="failed to get container status \"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\": rpc error: code = NotFound desc = could not find container \"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\": container with ID starting with c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.798645 4687 scope.go:117] "RemoveContainer" containerID="ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.798834 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} err="failed to get container status \"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f\": rpc error: code = NotFound desc = could not find container \"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f\": container with ID starting with ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.798852 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.799535 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"} err="failed to get container status \"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\": rpc error: code = NotFound desc = could not find container \"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\": container with ID starting with 1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.799564 4687 scope.go:117] "RemoveContainer" containerID="18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.799862 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} err="failed to get container status \"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\": rpc error: code = NotFound desc = could not find container \"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\": container with ID starting with 18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa not found: ID does not 
exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.799890 4687 scope.go:117] "RemoveContainer" containerID="d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.800151 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} err="failed to get container status \"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\": rpc error: code = NotFound desc = could not find container \"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\": container with ID starting with d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.800176 4687 scope.go:117] "RemoveContainer" containerID="ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.800859 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} err="failed to get container status \"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\": rpc error: code = NotFound desc = could not find container \"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\": container with ID starting with ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.800890 4687 scope.go:117] "RemoveContainer" containerID="18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.801167 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} err="failed to get container status 
\"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\": rpc error: code = NotFound desc = could not find container \"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\": container with ID starting with 18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.801193 4687 scope.go:117] "RemoveContainer" containerID="5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.801410 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} err="failed to get container status \"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\": rpc error: code = NotFound desc = could not find container \"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\": container with ID starting with 5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.801433 4687 scope.go:117] "RemoveContainer" containerID="5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.801706 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} err="failed to get container status \"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\": rpc error: code = NotFound desc = could not find container \"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\": container with ID starting with 5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.801729 4687 scope.go:117] "RemoveContainer" 
containerID="5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.802507 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} err="failed to get container status \"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\": rpc error: code = NotFound desc = could not find container \"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\": container with ID starting with 5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.802982 4687 scope.go:117] "RemoveContainer" containerID="c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.803323 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca"} err="failed to get container status \"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\": rpc error: code = NotFound desc = could not find container \"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\": container with ID starting with c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.803358 4687 scope.go:117] "RemoveContainer" containerID="ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.803675 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f"} err="failed to get container status \"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f\": rpc error: code = NotFound desc = could 
not find container \"ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f\": container with ID starting with ca93e8a15e180716afc920e5d5eb29dea3cf78f408a21a7238072d6025dcd86f not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.803707 4687 scope.go:117] "RemoveContainer" containerID="1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.804024 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2"} err="failed to get container status \"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\": rpc error: code = NotFound desc = could not find container \"1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2\": container with ID starting with 1efbc43565f9a7ab7f1ed2080a82fb0ff44c25499ef21109fe573a5c3eac56f2 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.804051 4687 scope.go:117] "RemoveContainer" containerID="18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.804324 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa"} err="failed to get container status \"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\": rpc error: code = NotFound desc = could not find container \"18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa\": container with ID starting with 18ffcfe1adb10a6c25428c5cf191b3fb5997a358e818fa9ef9b60fd3623ed0fa not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.804361 4687 scope.go:117] "RemoveContainer" containerID="d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 
17:52:18.804730 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68"} err="failed to get container status \"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\": rpc error: code = NotFound desc = could not find container \"d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68\": container with ID starting with d2da03b6f6e700b67c522d390f41259b5bec938e1e2bae88c0163faaa0dd4f68 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.804754 4687 scope.go:117] "RemoveContainer" containerID="ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.804928 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc"} err="failed to get container status \"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\": rpc error: code = NotFound desc = could not find container \"ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc\": container with ID starting with ca7cf332b494ebbc80cdfe8b68882908f263420eec7522476288cbcd7bcefebc not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.804952 4687 scope.go:117] "RemoveContainer" containerID="18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.805115 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789"} err="failed to get container status \"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\": rpc error: code = NotFound desc = could not find container \"18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789\": container with ID starting with 
18101fcd7dd2e3a3e7d76c8674c8443ec7a8c4f06366b0ea3976c8af6d49b789 not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.805148 4687 scope.go:117] "RemoveContainer" containerID="5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.806055 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e"} err="failed to get container status \"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\": rpc error: code = NotFound desc = could not find container \"5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e\": container with ID starting with 5b582daff8753d23b1dcddacddf260039ca0943e956e60deefbf37acf3a0df9e not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.806084 4687 scope.go:117] "RemoveContainer" containerID="5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.806335 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c"} err="failed to get container status \"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\": rpc error: code = NotFound desc = could not find container \"5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c\": container with ID starting with 5cd1f0d0d30679ac0a17ed3c9439d4b06356b9b532f8afbc7903e4bbf7be917c not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.806360 4687 scope.go:117] "RemoveContainer" containerID="5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.806553 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b"} err="failed to get container status \"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\": rpc error: code = NotFound desc = could not find container \"5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b\": container with ID starting with 5ae1a59b57986f73d086bd8052e6a8361bdbf5580ef027415d80d082a305679b not found: ID does not exist" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.806588 4687 scope.go:117] "RemoveContainer" containerID="c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca" Dec 03 17:52:18 crc kubenswrapper[4687]: I1203 17:52:18.806828 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca"} err="failed to get container status \"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\": rpc error: code = NotFound desc = could not find container \"c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca\": container with ID starting with c1c6555a4ffeb87a12291260af8a65d4d33b0948ddda7768066df2d7149369ca not found: ID does not exist" Dec 03 17:52:19 crc kubenswrapper[4687]: I1203 17:52:19.420056 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7fe22da-1ea3-49ba-b2c6-851ff064db76" path="/var/lib/kubelet/pods/f7fe22da-1ea3-49ba-b2c6-851ff064db76/volumes" Dec 03 17:52:19 crc kubenswrapper[4687]: I1203 17:52:19.528529 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kbjvs_ede1a722-2df8-433e-b8be-82c434be7d02/kube-multus/2.log" Dec 03 17:52:19 crc kubenswrapper[4687]: I1203 17:52:19.528646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kbjvs" 
event={"ID":"ede1a722-2df8-433e-b8be-82c434be7d02","Type":"ContainerStarted","Data":"e770bbd6c17e041013b0707cce3ab106b1affb2c2b46de3e5834e356670d92bd"} Dec 03 17:52:19 crc kubenswrapper[4687]: I1203 17:52:19.530310 4687 generic.go:334] "Generic (PLEG): container finished" podID="fde134bd-9ad7-4039-a9d5-33abf54eba24" containerID="46750ddccfee1124434d46e0c7dbe126fb68cf2e90f1cddd29398f616be23005" exitCode=0 Dec 03 17:52:19 crc kubenswrapper[4687]: I1203 17:52:19.530337 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerDied","Data":"46750ddccfee1124434d46e0c7dbe126fb68cf2e90f1cddd29398f616be23005"} Dec 03 17:52:19 crc kubenswrapper[4687]: I1203 17:52:19.530356 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerStarted","Data":"d74bf882d78f2b314865b9976b841820223d39967d89194ecaf6ba09d3608fdb"} Dec 03 17:52:20 crc kubenswrapper[4687]: I1203 17:52:20.543034 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerStarted","Data":"5f980e6684e094acd694490e103dfa598c78b37e3037b966259e25e6dbda0035"} Dec 03 17:52:20 crc kubenswrapper[4687]: I1203 17:52:20.543447 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerStarted","Data":"7a938333e2e00281ca5fd96e0351f936b1b03384886f384197e5257e858e14fa"} Dec 03 17:52:20 crc kubenswrapper[4687]: I1203 17:52:20.543467 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" 
event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerStarted","Data":"98f7e26fe5fe92e59d6677ca10fb3e78d463bb23e0b2f1ded3015517117e5ed2"} Dec 03 17:52:20 crc kubenswrapper[4687]: I1203 17:52:20.543479 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerStarted","Data":"6777f4e5a5cfa1d77928b528d90e7321762a2c8ae4387cc988bbea07f5a0676b"} Dec 03 17:52:20 crc kubenswrapper[4687]: I1203 17:52:20.543490 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerStarted","Data":"0310632a20bd13359e8ecbc1afee3f291189124773ef1d22e5fc3647532a84c2"} Dec 03 17:52:20 crc kubenswrapper[4687]: I1203 17:52:20.543501 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerStarted","Data":"fd2252e998abf3b2c1a34046923d12ee2fc7af0b7175388f7c2ca1d586cb630a"} Dec 03 17:52:22 crc kubenswrapper[4687]: I1203 17:52:22.558451 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerStarted","Data":"1fb17b12310f5b5fae9b2a31a489731c520200eb0cc044baa7a2fbc5ac8279fe"} Dec 03 17:52:25 crc kubenswrapper[4687]: I1203 17:52:25.586208 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" event={"ID":"fde134bd-9ad7-4039-a9d5-33abf54eba24","Type":"ContainerStarted","Data":"92576b992597331317cd51e85a2781f5e879b091f67b97fea53c30d845af46e1"} Dec 03 17:52:25 crc kubenswrapper[4687]: I1203 17:52:25.588115 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:25 crc kubenswrapper[4687]: I1203 17:52:25.588179 
4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:25 crc kubenswrapper[4687]: I1203 17:52:25.588192 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:25 crc kubenswrapper[4687]: I1203 17:52:25.615578 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" podStartSLOduration=7.615557468 podStartE2EDuration="7.615557468s" podCreationTimestamp="2025-12-03 17:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:52:25.614919542 +0000 UTC m=+778.505614995" watchObservedRunningTime="2025-12-03 17:52:25.615557468 +0000 UTC m=+778.506252901" Dec 03 17:52:25 crc kubenswrapper[4687]: I1203 17:52:25.622775 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:25 crc kubenswrapper[4687]: I1203 17:52:25.632399 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:48 crc kubenswrapper[4687]: I1203 17:52:48.694090 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7fdzk" Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.754417 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb"] Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.756154 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.758094 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.764730 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb"] Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.896618 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.896675 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8h4\" (UniqueName: \"kubernetes.io/projected/0de6d07d-8385-44ce-a57a-7950e1c8da08-kube-api-access-2p8h4\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.896758 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:57 crc kubenswrapper[4687]: 
I1203 17:52:57.997558 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8h4\" (UniqueName: \"kubernetes.io/projected/0de6d07d-8385-44ce-a57a-7950e1c8da08-kube-api-access-2p8h4\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.997638 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.997720 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.998250 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:57 crc kubenswrapper[4687]: I1203 17:52:57.998473 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:58 crc kubenswrapper[4687]: I1203 17:52:58.015900 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8h4\" (UniqueName: \"kubernetes.io/projected/0de6d07d-8385-44ce-a57a-7950e1c8da08-kube-api-access-2p8h4\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:58 crc kubenswrapper[4687]: I1203 17:52:58.072195 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" Dec 03 17:52:58 crc kubenswrapper[4687]: I1203 17:52:58.238101 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb"] Dec 03 17:52:58 crc kubenswrapper[4687]: I1203 17:52:58.785241 4687 generic.go:334] "Generic (PLEG): container finished" podID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerID="a40a6b34d828ec490d1c2671d2457367022323cec964731089e5f36e55f2932e" exitCode=0 Dec 03 17:52:58 crc kubenswrapper[4687]: I1203 17:52:58.785437 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" event={"ID":"0de6d07d-8385-44ce-a57a-7950e1c8da08","Type":"ContainerDied","Data":"a40a6b34d828ec490d1c2671d2457367022323cec964731089e5f36e55f2932e"} Dec 03 17:52:58 crc kubenswrapper[4687]: I1203 17:52:58.785650 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" event={"ID":"0de6d07d-8385-44ce-a57a-7950e1c8da08","Type":"ContainerStarted","Data":"00514e2d04e92d047eae6b35a62585972188171cd75891798e181c58d2eda155"} Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.071573 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dld2w"] Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.075439 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.079914 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dld2w"] Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.225936 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkq7h\" (UniqueName: \"kubernetes.io/projected/7e27355d-e8b9-40e2-aae6-19db4dd583dd-kube-api-access-wkq7h\") pod \"redhat-operators-dld2w\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") " pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.226491 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-catalog-content\") pod \"redhat-operators-dld2w\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") " pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.226596 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-utilities\") pod \"redhat-operators-dld2w\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") " 
pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.327918 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-catalog-content\") pod \"redhat-operators-dld2w\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") " pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.327971 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-utilities\") pod \"redhat-operators-dld2w\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") " pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.328022 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkq7h\" (UniqueName: \"kubernetes.io/projected/7e27355d-e8b9-40e2-aae6-19db4dd583dd-kube-api-access-wkq7h\") pod \"redhat-operators-dld2w\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") " pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.328479 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-catalog-content\") pod \"redhat-operators-dld2w\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") " pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.328499 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-utilities\") pod \"redhat-operators-dld2w\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") " pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc 
kubenswrapper[4687]: I1203 17:53:00.353619 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkq7h\" (UniqueName: \"kubernetes.io/projected/7e27355d-e8b9-40e2-aae6-19db4dd583dd-kube-api-access-wkq7h\") pod \"redhat-operators-dld2w\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") " pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.395147 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dld2w" Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.593823 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dld2w"] Dec 03 17:53:00 crc kubenswrapper[4687]: W1203 17:53:00.603170 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e27355d_e8b9_40e2_aae6_19db4dd583dd.slice/crio-52628fa670e5e562308fe42bdfb4b55af7ff15134da065124cc219e951f0736b WatchSource:0}: Error finding container 52628fa670e5e562308fe42bdfb4b55af7ff15134da065124cc219e951f0736b: Status 404 returned error can't find the container with id 52628fa670e5e562308fe42bdfb4b55af7ff15134da065124cc219e951f0736b Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.797293 4687 generic.go:334] "Generic (PLEG): container finished" podID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerID="d2baac316fa55451abd58f638cb9814300a13e16dde014174b5da373d3ed5132" exitCode=0 Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.797353 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dld2w" event={"ID":"7e27355d-e8b9-40e2-aae6-19db4dd583dd","Type":"ContainerDied","Data":"d2baac316fa55451abd58f638cb9814300a13e16dde014174b5da373d3ed5132"} Dec 03 17:53:00 crc kubenswrapper[4687]: I1203 17:53:00.797397 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-dld2w" event={"ID":"7e27355d-e8b9-40e2-aae6-19db4dd583dd","Type":"ContainerStarted","Data":"52628fa670e5e562308fe42bdfb4b55af7ff15134da065124cc219e951f0736b"} Dec 03 17:53:01 crc kubenswrapper[4687]: I1203 17:53:01.803979 4687 generic.go:334] "Generic (PLEG): container finished" podID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerID="5f1aea87092109b4ed812ffbc225c1b49c1a445f3574cf885e8600013176eb6d" exitCode=0 Dec 03 17:53:01 crc kubenswrapper[4687]: I1203 17:53:01.804039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" event={"ID":"0de6d07d-8385-44ce-a57a-7950e1c8da08","Type":"ContainerDied","Data":"5f1aea87092109b4ed812ffbc225c1b49c1a445f3574cf885e8600013176eb6d"} Dec 03 17:53:01 crc kubenswrapper[4687]: I1203 17:53:01.807978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dld2w" event={"ID":"7e27355d-e8b9-40e2-aae6-19db4dd583dd","Type":"ContainerStarted","Data":"67d4225f9d0ccc781b09438055a867bc8c4adab0558ed87596ce607b4d07e026"} Dec 03 17:53:02 crc kubenswrapper[4687]: I1203 17:53:02.815779 4687 generic.go:334] "Generic (PLEG): container finished" podID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerID="fdb2379e01a5f56cc2b0ec7f8aca4d3984c47db550613d2052cc82c2d52eed11" exitCode=0 Dec 03 17:53:02 crc kubenswrapper[4687]: I1203 17:53:02.815860 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" event={"ID":"0de6d07d-8385-44ce-a57a-7950e1c8da08","Type":"ContainerDied","Data":"fdb2379e01a5f56cc2b0ec7f8aca4d3984c47db550613d2052cc82c2d52eed11"} Dec 03 17:53:02 crc kubenswrapper[4687]: I1203 17:53:02.818078 4687 generic.go:334] "Generic (PLEG): container finished" podID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerID="67d4225f9d0ccc781b09438055a867bc8c4adab0558ed87596ce607b4d07e026" 
exitCode=0
Dec 03 17:53:02 crc kubenswrapper[4687]: I1203 17:53:02.818108 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dld2w" event={"ID":"7e27355d-e8b9-40e2-aae6-19db4dd583dd","Type":"ContainerDied","Data":"67d4225f9d0ccc781b09438055a867bc8c4adab0558ed87596ce607b4d07e026"}
Dec 03 17:53:03 crc kubenswrapper[4687]: I1203 17:53:03.825050 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dld2w" event={"ID":"7e27355d-e8b9-40e2-aae6-19db4dd583dd","Type":"ContainerStarted","Data":"f25ee889695438f95bd0958bb5c5ffd30cf7cde3ad8173c96ac248f9b372b769"}
Dec 03 17:53:03 crc kubenswrapper[4687]: I1203 17:53:03.844218 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dld2w" podStartSLOduration=1.523511907 podStartE2EDuration="3.844198215s" podCreationTimestamp="2025-12-03 17:53:00 +0000 UTC" firstStartedPulling="2025-12-03 17:53:00.907492721 +0000 UTC m=+813.798188154" lastFinishedPulling="2025-12-03 17:53:03.228179029 +0000 UTC m=+816.118874462" observedRunningTime="2025-12-03 17:53:03.841934744 +0000 UTC m=+816.732630177" watchObservedRunningTime="2025-12-03 17:53:03.844198215 +0000 UTC m=+816.734893658"
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.063698 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb"
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.176873 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-util\") pod \"0de6d07d-8385-44ce-a57a-7950e1c8da08\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") "
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.176983 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-bundle\") pod \"0de6d07d-8385-44ce-a57a-7950e1c8da08\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") "
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.177002 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p8h4\" (UniqueName: \"kubernetes.io/projected/0de6d07d-8385-44ce-a57a-7950e1c8da08-kube-api-access-2p8h4\") pod \"0de6d07d-8385-44ce-a57a-7950e1c8da08\" (UID: \"0de6d07d-8385-44ce-a57a-7950e1c8da08\") "
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.181402 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-bundle" (OuterVolumeSpecName: "bundle") pod "0de6d07d-8385-44ce-a57a-7950e1c8da08" (UID: "0de6d07d-8385-44ce-a57a-7950e1c8da08"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.184633 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de6d07d-8385-44ce-a57a-7950e1c8da08-kube-api-access-2p8h4" (OuterVolumeSpecName: "kube-api-access-2p8h4") pod "0de6d07d-8385-44ce-a57a-7950e1c8da08" (UID: "0de6d07d-8385-44ce-a57a-7950e1c8da08"). InnerVolumeSpecName "kube-api-access-2p8h4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.189493 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-util" (OuterVolumeSpecName: "util") pod "0de6d07d-8385-44ce-a57a-7950e1c8da08" (UID: "0de6d07d-8385-44ce-a57a-7950e1c8da08"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.278167 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-util\") on node \"crc\" DevicePath \"\""
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.278220 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0de6d07d-8385-44ce-a57a-7950e1c8da08-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.278231 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p8h4\" (UniqueName: \"kubernetes.io/projected/0de6d07d-8385-44ce-a57a-7950e1c8da08-kube-api-access-2p8h4\") on node \"crc\" DevicePath \"\""
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.837244 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb"
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.838310 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb" event={"ID":"0de6d07d-8385-44ce-a57a-7950e1c8da08","Type":"ContainerDied","Data":"00514e2d04e92d047eae6b35a62585972188171cd75891798e181c58d2eda155"}
Dec 03 17:53:04 crc kubenswrapper[4687]: I1203 17:53:04.838348 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00514e2d04e92d047eae6b35a62585972188171cd75891798e181c58d2eda155"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.786503 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx"]
Dec 03 17:53:07 crc kubenswrapper[4687]: E1203 17:53:07.786706 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerName="pull"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.786716 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerName="pull"
Dec 03 17:53:07 crc kubenswrapper[4687]: E1203 17:53:07.786725 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerName="extract"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.786730 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerName="extract"
Dec 03 17:53:07 crc kubenswrapper[4687]: E1203 17:53:07.786745 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerName="util"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.786751 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerName="util"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.786862 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de6d07d-8385-44ce-a57a-7950e1c8da08" containerName="extract"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.787396 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.789610 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2zbb7"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.790004 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.790018 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.800588 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx"]
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.826164 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9wjc\" (UniqueName: \"kubernetes.io/projected/d803fc3b-cbaa-4241-870a-7c89982621dd-kube-api-access-t9wjc\") pod \"nmstate-operator-5b5b58f5c8-l4nkx\" (UID: \"d803fc3b-cbaa-4241-870a-7c89982621dd\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.927751 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9wjc\" (UniqueName: \"kubernetes.io/projected/d803fc3b-cbaa-4241-870a-7c89982621dd-kube-api-access-t9wjc\") pod \"nmstate-operator-5b5b58f5c8-l4nkx\" (UID: \"d803fc3b-cbaa-4241-870a-7c89982621dd\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx"
Dec 03 17:53:07 crc kubenswrapper[4687]: I1203 17:53:07.948445 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9wjc\" (UniqueName: \"kubernetes.io/projected/d803fc3b-cbaa-4241-870a-7c89982621dd-kube-api-access-t9wjc\") pod \"nmstate-operator-5b5b58f5c8-l4nkx\" (UID: \"d803fc3b-cbaa-4241-870a-7c89982621dd\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx"
Dec 03 17:53:08 crc kubenswrapper[4687]: I1203 17:53:08.102431 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx"
Dec 03 17:53:08 crc kubenswrapper[4687]: I1203 17:53:08.373350 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx"]
Dec 03 17:53:08 crc kubenswrapper[4687]: I1203 17:53:08.863800 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx" event={"ID":"d803fc3b-cbaa-4241-870a-7c89982621dd","Type":"ContainerStarted","Data":"c97a2841be1ec76454aa33a8d832b6c8aab2008dd1385be0dc500c22464c2722"}
Dec 03 17:53:10 crc kubenswrapper[4687]: I1203 17:53:10.395491 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dld2w"
Dec 03 17:53:10 crc kubenswrapper[4687]: I1203 17:53:10.395571 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dld2w"
Dec 03 17:53:10 crc kubenswrapper[4687]: I1203 17:53:10.457817 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dld2w"
Dec 03 17:53:10 crc kubenswrapper[4687]: I1203 17:53:10.910508 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dld2w"
Dec 03 17:53:12 crc kubenswrapper[4687]: I1203 17:53:12.455707 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-dld2w"]
Dec 03 17:53:12 crc kubenswrapper[4687]: I1203 17:53:12.891525 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dld2w" podUID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerName="registry-server" containerID="cri-o://f25ee889695438f95bd0958bb5c5ffd30cf7cde3ad8173c96ac248f9b372b769" gracePeriod=2
Dec 03 17:53:15 crc kubenswrapper[4687]: I1203 17:53:15.908730 4687 generic.go:334] "Generic (PLEG): container finished" podID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerID="f25ee889695438f95bd0958bb5c5ffd30cf7cde3ad8173c96ac248f9b372b769" exitCode=0
Dec 03 17:53:15 crc kubenswrapper[4687]: I1203 17:53:15.908902 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dld2w" event={"ID":"7e27355d-e8b9-40e2-aae6-19db4dd583dd","Type":"ContainerDied","Data":"f25ee889695438f95bd0958bb5c5ffd30cf7cde3ad8173c96ac248f9b372b769"}
Dec 03 17:53:15 crc kubenswrapper[4687]: I1203 17:53:15.909132 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dld2w" event={"ID":"7e27355d-e8b9-40e2-aae6-19db4dd583dd","Type":"ContainerDied","Data":"52628fa670e5e562308fe42bdfb4b55af7ff15134da065124cc219e951f0736b"}
Dec 03 17:53:15 crc kubenswrapper[4687]: I1203 17:53:15.909147 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52628fa670e5e562308fe42bdfb4b55af7ff15134da065124cc219e951f0736b"
Dec 03 17:53:15 crc kubenswrapper[4687]: I1203 17:53:15.909664 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dld2w"
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.019809 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-utilities\") pod \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") "
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.019857 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-catalog-content\") pod \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") "
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.019917 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkq7h\" (UniqueName: \"kubernetes.io/projected/7e27355d-e8b9-40e2-aae6-19db4dd583dd-kube-api-access-wkq7h\") pod \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\" (UID: \"7e27355d-e8b9-40e2-aae6-19db4dd583dd\") "
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.021096 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-utilities" (OuterVolumeSpecName: "utilities") pod "7e27355d-e8b9-40e2-aae6-19db4dd583dd" (UID: "7e27355d-e8b9-40e2-aae6-19db4dd583dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.025295 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e27355d-e8b9-40e2-aae6-19db4dd583dd-kube-api-access-wkq7h" (OuterVolumeSpecName: "kube-api-access-wkq7h") pod "7e27355d-e8b9-40e2-aae6-19db4dd583dd" (UID: "7e27355d-e8b9-40e2-aae6-19db4dd583dd"). InnerVolumeSpecName "kube-api-access-wkq7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.121588 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkq7h\" (UniqueName: \"kubernetes.io/projected/7e27355d-e8b9-40e2-aae6-19db4dd583dd-kube-api-access-wkq7h\") on node \"crc\" DevicePath \"\""
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.121638 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-utilities\") on node \"crc\" DevicePath \"\""
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.125446 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e27355d-e8b9-40e2-aae6-19db4dd583dd" (UID: "7e27355d-e8b9-40e2-aae6-19db4dd583dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.223260 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e27355d-e8b9-40e2-aae6-19db4dd583dd-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.914352 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dld2w"
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.943171 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dld2w"]
Dec 03 17:53:16 crc kubenswrapper[4687]: I1203 17:53:16.945888 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dld2w"]
Dec 03 17:53:17 crc kubenswrapper[4687]: I1203 17:53:17.414971 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" path="/var/lib/kubelet/pods/7e27355d-e8b9-40e2-aae6-19db4dd583dd/volumes"
Dec 03 17:53:23 crc kubenswrapper[4687]: I1203 17:53:23.956575 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx" event={"ID":"d803fc3b-cbaa-4241-870a-7c89982621dd","Type":"ContainerStarted","Data":"83da50548dc1f1282a889e396cb8aebb907b6b282e6d77e7fe10e0ac1186e211"}
Dec 03 17:53:23 crc kubenswrapper[4687]: I1203 17:53:23.982163 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-l4nkx" podStartSLOduration=1.946787362 podStartE2EDuration="16.982148367s" podCreationTimestamp="2025-12-03 17:53:07 +0000 UTC" firstStartedPulling="2025-12-03 17:53:08.380266232 +0000 UTC m=+821.270961665" lastFinishedPulling="2025-12-03 17:53:23.415627237 +0000 UTC m=+836.306322670" observedRunningTime="2025-12-03 17:53:23.978299173 +0000 UTC m=+836.868994606" watchObservedRunningTime="2025-12-03 17:53:23.982148367 +0000 UTC m=+836.872843800"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.891725 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd"]
Dec 03 17:53:24 crc kubenswrapper[4687]: E1203 17:53:24.892181 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerName="extract-content"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.892194 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerName="extract-content"
Dec 03 17:53:24 crc kubenswrapper[4687]: E1203 17:53:24.892214 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerName="extract-utilities"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.892222 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerName="extract-utilities"
Dec 03 17:53:24 crc kubenswrapper[4687]: E1203 17:53:24.892241 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerName="registry-server"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.892249 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerName="registry-server"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.892516 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e27355d-e8b9-40e2-aae6-19db4dd583dd" containerName="registry-server"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.899333 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.904224 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xwk2p"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.909375 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6"]
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.910224 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.911705 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.920454 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd"]
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.926419 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-p2m72"]
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.927071 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.931803 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8bf00a4-e266-4c05-bfc5-4121c96f0368-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-blkh6\" (UID: \"b8bf00a4-e266-4c05-bfc5-4121c96f0368\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.931862 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgtfv\" (UniqueName: \"kubernetes.io/projected/9623c042-2813-4192-a1fc-a92a58364fce-kube-api-access-lgtfv\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.931887 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9623c042-2813-4192-a1fc-a92a58364fce-nmstate-lock\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.931926 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gqp\" (UniqueName: \"kubernetes.io/projected/b8bf00a4-e266-4c05-bfc5-4121c96f0368-kube-api-access-v6gqp\") pod \"nmstate-webhook-5f6d4c5ccb-blkh6\" (UID: \"b8bf00a4-e266-4c05-bfc5-4121c96f0368\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.931941 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9623c042-2813-4192-a1fc-a92a58364fce-ovs-socket\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.931955 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9623c042-2813-4192-a1fc-a92a58364fce-dbus-socket\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.931970 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7vgx\" (UniqueName: \"kubernetes.io/projected/34214bad-1472-4611-9876-d7765279821c-kube-api-access-g7vgx\") pod \"nmstate-metrics-7f946cbc9-ncxtd\" (UID: \"34214bad-1472-4611-9876-d7765279821c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd"
Dec 03 17:53:24 crc kubenswrapper[4687]: I1203 17:53:24.938964 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6"]
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.019980 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"]
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.021401 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.023406 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-8hflr"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.023772 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.023996 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.027702 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"]
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032438 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9623c042-2813-4192-a1fc-a92a58364fce-nmstate-lock\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032492 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1bd9d52-1f74-4001-a2ff-c3a84666c686-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032526 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r666\" (UniqueName: \"kubernetes.io/projected/b1bd9d52-1f74-4001-a2ff-c3a84666c686-kube-api-access-6r666\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032559 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6gqp\" (UniqueName: \"kubernetes.io/projected/b8bf00a4-e266-4c05-bfc5-4121c96f0368-kube-api-access-v6gqp\") pod \"nmstate-webhook-5f6d4c5ccb-blkh6\" (UID: \"b8bf00a4-e266-4c05-bfc5-4121c96f0368\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032582 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9623c042-2813-4192-a1fc-a92a58364fce-ovs-socket\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/9623c042-2813-4192-a1fc-a92a58364fce-dbus-socket\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032627 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7vgx\" (UniqueName: \"kubernetes.io/projected/34214bad-1472-4611-9876-d7765279821c-kube-api-access-g7vgx\") pod \"nmstate-metrics-7f946cbc9-ncxtd\" (UID: \"34214bad-1472-4611-9876-d7765279821c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032667 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1bd9d52-1f74-4001-a2ff-c3a84666c686-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032758 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8bf00a4-e266-4c05-bfc5-4121c96f0368-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-blkh6\" (UID: \"b8bf00a4-e266-4c05-bfc5-4121c96f0368\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.032785 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgtfv\" (UniqueName: \"kubernetes.io/projected/9623c042-2813-4192-a1fc-a92a58364fce-kube-api-access-lgtfv\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.033000 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/9623c042-2813-4192-a1fc-a92a58364fce-ovs-socket\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.033072 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/9623c042-2813-4192-a1fc-a92a58364fce-nmstate-lock\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.033360 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/9623c042-2813-4192-a1fc-a92a58364fce-dbus-socket\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:25 crc kubenswrapper[4687]: E1203 17:53:25.033593 4687 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Dec 03 17:53:25 crc kubenswrapper[4687]: E1203 17:53:25.034642 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8bf00a4-e266-4c05-bfc5-4121c96f0368-tls-key-pair podName:b8bf00a4-e266-4c05-bfc5-4121c96f0368 nodeName:}" failed. No retries permitted until 2025-12-03 17:53:25.534622779 +0000 UTC m=+838.425318292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/b8bf00a4-e266-4c05-bfc5-4121c96f0368-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-blkh6" (UID: "b8bf00a4-e266-4c05-bfc5-4121c96f0368") : secret "openshift-nmstate-webhook" not found
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.058422 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7vgx\" (UniqueName: \"kubernetes.io/projected/34214bad-1472-4611-9876-d7765279821c-kube-api-access-g7vgx\") pod \"nmstate-metrics-7f946cbc9-ncxtd\" (UID: \"34214bad-1472-4611-9876-d7765279821c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.059813 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6gqp\" (UniqueName: \"kubernetes.io/projected/b8bf00a4-e266-4c05-bfc5-4121c96f0368-kube-api-access-v6gqp\") pod \"nmstate-webhook-5f6d4c5ccb-blkh6\" (UID: \"b8bf00a4-e266-4c05-bfc5-4121c96f0368\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.074313 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgtfv\" (UniqueName: \"kubernetes.io/projected/9623c042-2813-4192-a1fc-a92a58364fce-kube-api-access-lgtfv\") pod \"nmstate-handler-p2m72\" (UID: \"9623c042-2813-4192-a1fc-a92a58364fce\") " pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.134076 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1bd9d52-1f74-4001-a2ff-c3a84666c686-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.134147 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r666\" (UniqueName: \"kubernetes.io/projected/b1bd9d52-1f74-4001-a2ff-c3a84666c686-kube-api-access-6r666\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.134176 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1bd9d52-1f74-4001-a2ff-c3a84666c686-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.135005 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1bd9d52-1f74-4001-a2ff-c3a84666c686-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"
Dec 03 17:53:25 crc kubenswrapper[4687]: E1203 17:53:25.135085 4687 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Dec 03 17:53:25 crc kubenswrapper[4687]: E1203 17:53:25.135143 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1bd9d52-1f74-4001-a2ff-c3a84666c686-plugin-serving-cert podName:b1bd9d52-1f74-4001-a2ff-c3a84666c686 nodeName:}" failed. No retries permitted until 2025-12-03 17:53:25.635111168 +0000 UTC m=+838.525806601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b1bd9d52-1f74-4001-a2ff-c3a84666c686-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-dkrpg" (UID: "b1bd9d52-1f74-4001-a2ff-c3a84666c686") : secret "plugin-serving-cert" not found
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.160483 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r666\" (UniqueName: \"kubernetes.io/projected/b1bd9d52-1f74-4001-a2ff-c3a84666c686-kube-api-access-6r666\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.214686 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69866dbfb5-t62jl"]
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.215654 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.226065 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.226438 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69866dbfb5-t62jl"]
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.234916 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-console-serving-cert\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.234951 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-console-oauth-config\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.234972 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-oauth-serving-cert\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.235009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-service-ca\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.235040 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-trusted-ca-bundle\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.235110 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgkpz\" (UniqueName: \"kubernetes.io/projected/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-kube-api-access-dgkpz\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.235220 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-console-config\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.245653 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-p2m72"
Dec 03 17:53:25 crc kubenswrapper[4687]: W1203 17:53:25.282359 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9623c042_2813_4192_a1fc_a92a58364fce.slice/crio-b4d267a4428be2df34fd8ed3552e9596e3cc36a4f799370b106c26fc13a18d16 WatchSource:0}: Error finding container b4d267a4428be2df34fd8ed3552e9596e3cc36a4f799370b106c26fc13a18d16: Status 404 returned error can't find the container with id b4d267a4428be2df34fd8ed3552e9596e3cc36a4f799370b106c26fc13a18d16
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.336307 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-console-serving-cert\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.336353 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-console-oauth-config\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.336375 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-oauth-serving-cert\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl"
Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.336404 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-service-ca\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.337679 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-oauth-serving-cert\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.337776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-trusted-ca-bundle\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.337755 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-service-ca\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.337885 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgkpz\" (UniqueName: \"kubernetes.io/projected/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-kube-api-access-dgkpz\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.337917 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-console-config\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.338492 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-console-config\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.339241 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-trusted-ca-bundle\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.341261 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-console-serving-cert\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.341943 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-console-oauth-config\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.355747 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgkpz\" (UniqueName: 
\"kubernetes.io/projected/4b5332de-8e03-4c0e-8bdf-60a98cd945ae-kube-api-access-dgkpz\") pod \"console-69866dbfb5-t62jl\" (UID: \"4b5332de-8e03-4c0e-8bdf-60a98cd945ae\") " pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.433065 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd"] Dec 03 17:53:25 crc kubenswrapper[4687]: W1203 17:53:25.438037 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34214bad_1472_4611_9876_d7765279821c.slice/crio-c1654fd429acef01b325b183209e9a0b5ba176bf7c8a3100710fdc022f0c2dce WatchSource:0}: Error finding container c1654fd429acef01b325b183209e9a0b5ba176bf7c8a3100710fdc022f0c2dce: Status 404 returned error can't find the container with id c1654fd429acef01b325b183209e9a0b5ba176bf7c8a3100710fdc022f0c2dce Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.532554 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.539850 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8bf00a4-e266-4c05-bfc5-4121c96f0368-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-blkh6\" (UID: \"b8bf00a4-e266-4c05-bfc5-4121c96f0368\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.544333 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b8bf00a4-e266-4c05-bfc5-4121c96f0368-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-blkh6\" (UID: \"b8bf00a4-e266-4c05-bfc5-4121c96f0368\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.640589 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1bd9d52-1f74-4001-a2ff-c3a84666c686-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.643853 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1bd9d52-1f74-4001-a2ff-c3a84666c686-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-dkrpg\" (UID: \"b1bd9d52-1f74-4001-a2ff-c3a84666c686\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.751274 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69866dbfb5-t62jl"] Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.833673 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.938482 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg" Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.976482 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd" event={"ID":"34214bad-1472-4611-9876-d7765279821c","Type":"ContainerStarted","Data":"c1654fd429acef01b325b183209e9a0b5ba176bf7c8a3100710fdc022f0c2dce"} Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.978203 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p2m72" event={"ID":"9623c042-2813-4192-a1fc-a92a58364fce","Type":"ContainerStarted","Data":"b4d267a4428be2df34fd8ed3552e9596e3cc36a4f799370b106c26fc13a18d16"} Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.987334 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69866dbfb5-t62jl" event={"ID":"4b5332de-8e03-4c0e-8bdf-60a98cd945ae","Type":"ContainerStarted","Data":"6b875220975c083add328108118f01a810dc00b91f9637589fcc1c1bddc5e843"} Dec 03 17:53:25 crc kubenswrapper[4687]: I1203 17:53:25.987383 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69866dbfb5-t62jl" event={"ID":"4b5332de-8e03-4c0e-8bdf-60a98cd945ae","Type":"ContainerStarted","Data":"3f7804786b3dc03e5d62202b1094a6760a11251a8e2fd4943e1fa956183324ea"} Dec 03 17:53:26 crc kubenswrapper[4687]: I1203 17:53:26.024031 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6"] Dec 03 17:53:26 crc kubenswrapper[4687]: I1203 17:53:26.024184 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69866dbfb5-t62jl" podStartSLOduration=1.024162734 podStartE2EDuration="1.024162734s" 
podCreationTimestamp="2025-12-03 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:53:26.01844573 +0000 UTC m=+838.909141173" watchObservedRunningTime="2025-12-03 17:53:26.024162734 +0000 UTC m=+838.914858177" Dec 03 17:53:26 crc kubenswrapper[4687]: I1203 17:53:26.371818 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg"] Dec 03 17:53:26 crc kubenswrapper[4687]: W1203 17:53:26.385179 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1bd9d52_1f74_4001_a2ff_c3a84666c686.slice/crio-878f4491017ae8bf2a01b5226752db5954050531b4c71b995632d237c5f2bdd6 WatchSource:0}: Error finding container 878f4491017ae8bf2a01b5226752db5954050531b4c71b995632d237c5f2bdd6: Status 404 returned error can't find the container with id 878f4491017ae8bf2a01b5226752db5954050531b4c71b995632d237c5f2bdd6 Dec 03 17:53:26 crc kubenswrapper[4687]: I1203 17:53:26.993600 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg" event={"ID":"b1bd9d52-1f74-4001-a2ff-c3a84666c686","Type":"ContainerStarted","Data":"878f4491017ae8bf2a01b5226752db5954050531b4c71b995632d237c5f2bdd6"} Dec 03 17:53:26 crc kubenswrapper[4687]: I1203 17:53:26.995110 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6" event={"ID":"b8bf00a4-e266-4c05-bfc5-4121c96f0368","Type":"ContainerStarted","Data":"4c492c2d86e5882b92d84297dc38a514cd120261410eed7c440c3dcdcd9fd044"} Dec 03 17:53:29 crc kubenswrapper[4687]: I1203 17:53:29.010890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg" 
event={"ID":"b1bd9d52-1f74-4001-a2ff-c3a84666c686","Type":"ContainerStarted","Data":"02b01ca4e8fee04bc86d68d7ab2047b9038548f66e4d98d8fba653e86f880a86"} Dec 03 17:53:29 crc kubenswrapper[4687]: I1203 17:53:29.012657 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6" event={"ID":"b8bf00a4-e266-4c05-bfc5-4121c96f0368","Type":"ContainerStarted","Data":"80b6ac3f38eb1d8087099133cc3d47b762c2ea905c84d7742b75530bb0718e95"} Dec 03 17:53:29 crc kubenswrapper[4687]: I1203 17:53:29.012727 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6" Dec 03 17:53:29 crc kubenswrapper[4687]: I1203 17:53:29.016909 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-p2m72" event={"ID":"9623c042-2813-4192-a1fc-a92a58364fce","Type":"ContainerStarted","Data":"b6bb7a7a874928a3d1b383be29b661bf5654f051941673bae33fcb7e5ec6c9e6"} Dec 03 17:53:29 crc kubenswrapper[4687]: I1203 17:53:29.016992 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-p2m72" Dec 03 17:53:29 crc kubenswrapper[4687]: I1203 17:53:29.019792 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd" event={"ID":"34214bad-1472-4611-9876-d7765279821c","Type":"ContainerStarted","Data":"eb50d30c6f6748030f992acdd31aa3611dce141002d7fb72913bbdbd8daf202e"} Dec 03 17:53:29 crc kubenswrapper[4687]: I1203 17:53:29.027990 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-dkrpg" podStartSLOduration=1.6013148099999999 podStartE2EDuration="4.027967866s" podCreationTimestamp="2025-12-03 17:53:25 +0000 UTC" firstStartedPulling="2025-12-03 17:53:26.387822346 +0000 UTC m=+839.278517789" lastFinishedPulling="2025-12-03 17:53:28.814475412 +0000 UTC m=+841.705170845" 
observedRunningTime="2025-12-03 17:53:29.026587669 +0000 UTC m=+841.917283102" watchObservedRunningTime="2025-12-03 17:53:29.027967866 +0000 UTC m=+841.918663299" Dec 03 17:53:29 crc kubenswrapper[4687]: I1203 17:53:29.048432 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6" podStartSLOduration=3.140343332 podStartE2EDuration="5.048413728s" podCreationTimestamp="2025-12-03 17:53:24 +0000 UTC" firstStartedPulling="2025-12-03 17:53:26.053328579 +0000 UTC m=+838.944024012" lastFinishedPulling="2025-12-03 17:53:27.961398975 +0000 UTC m=+840.852094408" observedRunningTime="2025-12-03 17:53:29.045370475 +0000 UTC m=+841.936065908" watchObservedRunningTime="2025-12-03 17:53:29.048413728 +0000 UTC m=+841.939109161" Dec 03 17:53:29 crc kubenswrapper[4687]: I1203 17:53:29.064304 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-p2m72" podStartSLOduration=2.466185339 podStartE2EDuration="5.064284165s" podCreationTimestamp="2025-12-03 17:53:24 +0000 UTC" firstStartedPulling="2025-12-03 17:53:25.284218117 +0000 UTC m=+838.174913550" lastFinishedPulling="2025-12-03 17:53:27.882316943 +0000 UTC m=+840.773012376" observedRunningTime="2025-12-03 17:53:29.063293858 +0000 UTC m=+841.953989291" watchObservedRunningTime="2025-12-03 17:53:29.064284165 +0000 UTC m=+841.954979598" Dec 03 17:53:31 crc kubenswrapper[4687]: I1203 17:53:31.030502 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd" event={"ID":"34214bad-1472-4611-9876-d7765279821c","Type":"ContainerStarted","Data":"a8a3a9a494c4bc133535a671914125eb1278f3790ab36e38707f9ade90b14e06"} Dec 03 17:53:35 crc kubenswrapper[4687]: I1203 17:53:35.271414 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-p2m72" Dec 03 17:53:35 crc kubenswrapper[4687]: I1203 17:53:35.289426 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-ncxtd" podStartSLOduration=6.298640508 podStartE2EDuration="11.289409732s" podCreationTimestamp="2025-12-03 17:53:24 +0000 UTC" firstStartedPulling="2025-12-03 17:53:25.440387187 +0000 UTC m=+838.331082620" lastFinishedPulling="2025-12-03 17:53:30.431156391 +0000 UTC m=+843.321851844" observedRunningTime="2025-12-03 17:53:31.048192915 +0000 UTC m=+843.938888358" watchObservedRunningTime="2025-12-03 17:53:35.289409732 +0000 UTC m=+848.180105165" Dec 03 17:53:35 crc kubenswrapper[4687]: I1203 17:53:35.533615 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:35 crc kubenswrapper[4687]: I1203 17:53:35.534048 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:35 crc kubenswrapper[4687]: I1203 17:53:35.540310 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:36 crc kubenswrapper[4687]: I1203 17:53:36.065349 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-69866dbfb5-t62jl" Dec 03 17:53:36 crc kubenswrapper[4687]: I1203 17:53:36.109695 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mkvps"] Dec 03 17:53:45 crc kubenswrapper[4687]: I1203 17:53:45.841963 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-blkh6" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.477783 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58"] Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.479584 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.482099 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.494741 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58"] Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.593233 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.593331 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.593392 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccpcb\" (UniqueName: \"kubernetes.io/projected/c98b03c2-e740-402d-b2f8-d8ab27224b94-kube-api-access-ccpcb\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: 
I1203 17:53:58.694378 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.694739 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.694770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccpcb\" (UniqueName: \"kubernetes.io/projected/c98b03c2-e740-402d-b2f8-d8ab27224b94-kube-api-access-ccpcb\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.694861 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.695199 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.714553 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccpcb\" (UniqueName: \"kubernetes.io/projected/c98b03c2-e740-402d-b2f8-d8ab27224b94-kube-api-access-ccpcb\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:58 crc kubenswrapper[4687]: I1203 17:53:58.796215 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:53:59 crc kubenswrapper[4687]: I1203 17:53:59.201771 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58"] Dec 03 17:54:00 crc kubenswrapper[4687]: I1203 17:54:00.208898 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" event={"ID":"c98b03c2-e740-402d-b2f8-d8ab27224b94","Type":"ContainerStarted","Data":"f756e5755c01f7cef7e5457db0685d17bbbba893e98f056fe87cec989bd29ad2"} Dec 03 17:54:00 crc kubenswrapper[4687]: I1203 17:54:00.209299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" event={"ID":"c98b03c2-e740-402d-b2f8-d8ab27224b94","Type":"ContainerStarted","Data":"089cd0c1dc9802479f8691bbe449da918b16a2755103f3b507e2eb3d03a0a886"} Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.148011 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-mkvps" podUID="1c55e5e2-5437-468e-9410-605afa2612d9" containerName="console" containerID="cri-o://e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc" gracePeriod=15 Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.217221 4687 generic.go:334] "Generic (PLEG): container finished" podID="c98b03c2-e740-402d-b2f8-d8ab27224b94" containerID="f756e5755c01f7cef7e5457db0685d17bbbba893e98f056fe87cec989bd29ad2" exitCode=0 Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.217260 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" event={"ID":"c98b03c2-e740-402d-b2f8-d8ab27224b94","Type":"ContainerDied","Data":"f756e5755c01f7cef7e5457db0685d17bbbba893e98f056fe87cec989bd29ad2"} Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.503918 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mkvps_1c55e5e2-5437-468e-9410-605afa2612d9/console/0.log" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.504260 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.647775 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-service-ca\") pod \"1c55e5e2-5437-468e-9410-605afa2612d9\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.647853 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-oauth-serving-cert\") pod \"1c55e5e2-5437-468e-9410-605afa2612d9\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.647896 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-serving-cert\") pod \"1c55e5e2-5437-468e-9410-605afa2612d9\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.647916 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-console-config\") pod \"1c55e5e2-5437-468e-9410-605afa2612d9\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.647974 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-trusted-ca-bundle\") pod \"1c55e5e2-5437-468e-9410-605afa2612d9\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.648013 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-oauth-config\") pod \"1c55e5e2-5437-468e-9410-605afa2612d9\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.648046 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvspw\" (UniqueName: \"kubernetes.io/projected/1c55e5e2-5437-468e-9410-605afa2612d9-kube-api-access-rvspw\") pod \"1c55e5e2-5437-468e-9410-605afa2612d9\" (UID: \"1c55e5e2-5437-468e-9410-605afa2612d9\") " Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.648791 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1c55e5e2-5437-468e-9410-605afa2612d9" (UID: "1c55e5e2-5437-468e-9410-605afa2612d9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.648642 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-service-ca" (OuterVolumeSpecName: "service-ca") pod "1c55e5e2-5437-468e-9410-605afa2612d9" (UID: "1c55e5e2-5437-468e-9410-605afa2612d9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.649012 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-console-config" (OuterVolumeSpecName: "console-config") pod "1c55e5e2-5437-468e-9410-605afa2612d9" (UID: "1c55e5e2-5437-468e-9410-605afa2612d9"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.649997 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1c55e5e2-5437-468e-9410-605afa2612d9" (UID: "1c55e5e2-5437-468e-9410-605afa2612d9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.654719 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c55e5e2-5437-468e-9410-605afa2612d9-kube-api-access-rvspw" (OuterVolumeSpecName: "kube-api-access-rvspw") pod "1c55e5e2-5437-468e-9410-605afa2612d9" (UID: "1c55e5e2-5437-468e-9410-605afa2612d9"). InnerVolumeSpecName "kube-api-access-rvspw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.655579 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1c55e5e2-5437-468e-9410-605afa2612d9" (UID: "1c55e5e2-5437-468e-9410-605afa2612d9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.660772 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1c55e5e2-5437-468e-9410-605afa2612d9" (UID: "1c55e5e2-5437-468e-9410-605afa2612d9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.749814 4687 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.749858 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvspw\" (UniqueName: \"kubernetes.io/projected/1c55e5e2-5437-468e-9410-605afa2612d9-kube-api-access-rvspw\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.749872 4687 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-service-ca\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.749883 4687 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.749894 4687 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1c55e5e2-5437-468e-9410-605afa2612d9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.749902 4687 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-console-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:01 crc kubenswrapper[4687]: I1203 17:54:01.749911 4687 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c55e5e2-5437-468e-9410-605afa2612d9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:02 crc 
kubenswrapper[4687]: I1203 17:54:02.239771 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-mkvps_1c55e5e2-5437-468e-9410-605afa2612d9/console/0.log" Dec 03 17:54:02 crc kubenswrapper[4687]: I1203 17:54:02.239817 4687 generic.go:334] "Generic (PLEG): container finished" podID="1c55e5e2-5437-468e-9410-605afa2612d9" containerID="e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc" exitCode=2 Dec 03 17:54:02 crc kubenswrapper[4687]: I1203 17:54:02.239842 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mkvps" event={"ID":"1c55e5e2-5437-468e-9410-605afa2612d9","Type":"ContainerDied","Data":"e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc"} Dec 03 17:54:02 crc kubenswrapper[4687]: I1203 17:54:02.239867 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mkvps" event={"ID":"1c55e5e2-5437-468e-9410-605afa2612d9","Type":"ContainerDied","Data":"4aa0e299c7beecfd0e34299d3a8654324887bf2e0ae08706fa7d8143659c0607"} Dec 03 17:54:02 crc kubenswrapper[4687]: I1203 17:54:02.239883 4687 scope.go:117] "RemoveContainer" containerID="e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc" Dec 03 17:54:02 crc kubenswrapper[4687]: I1203 17:54:02.240016 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-mkvps" Dec 03 17:54:02 crc kubenswrapper[4687]: I1203 17:54:02.257252 4687 scope.go:117] "RemoveContainer" containerID="e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc" Dec 03 17:54:02 crc kubenswrapper[4687]: E1203 17:54:02.257654 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc\": container with ID starting with e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc not found: ID does not exist" containerID="e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc" Dec 03 17:54:02 crc kubenswrapper[4687]: I1203 17:54:02.257695 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc"} err="failed to get container status \"e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc\": rpc error: code = NotFound desc = could not find container \"e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc\": container with ID starting with e6b18ee973eb8541fb14bfdf192e245c6bc298090c570dfd57bb1e046381a9bc not found: ID does not exist" Dec 03 17:54:02 crc kubenswrapper[4687]: I1203 17:54:02.281861 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-mkvps"] Dec 03 17:54:02 crc kubenswrapper[4687]: I1203 17:54:02.287025 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-mkvps"] Dec 03 17:54:03 crc kubenswrapper[4687]: I1203 17:54:03.420655 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c55e5e2-5437-468e-9410-605afa2612d9" path="/var/lib/kubelet/pods/1c55e5e2-5437-468e-9410-605afa2612d9/volumes" Dec 03 17:54:08 crc kubenswrapper[4687]: I1203 17:54:08.278970 4687 generic.go:334] "Generic (PLEG): 
container finished" podID="c98b03c2-e740-402d-b2f8-d8ab27224b94" containerID="3d02915c6fe574881252b2ad485a3feb6afe7d9ff414c8d14f1e91850ecac351" exitCode=0 Dec 03 17:54:08 crc kubenswrapper[4687]: I1203 17:54:08.279044 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" event={"ID":"c98b03c2-e740-402d-b2f8-d8ab27224b94","Type":"ContainerDied","Data":"3d02915c6fe574881252b2ad485a3feb6afe7d9ff414c8d14f1e91850ecac351"} Dec 03 17:54:09 crc kubenswrapper[4687]: I1203 17:54:09.287530 4687 generic.go:334] "Generic (PLEG): container finished" podID="c98b03c2-e740-402d-b2f8-d8ab27224b94" containerID="33a25805e30e919197f4becb6b96802cfc2ab677b8c7a3084f183595aef9644f" exitCode=0 Dec 03 17:54:09 crc kubenswrapper[4687]: I1203 17:54:09.287598 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" event={"ID":"c98b03c2-e740-402d-b2f8-d8ab27224b94","Type":"ContainerDied","Data":"33a25805e30e919197f4becb6b96802cfc2ab677b8c7a3084f183595aef9644f"} Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.498149 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.669538 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccpcb\" (UniqueName: \"kubernetes.io/projected/c98b03c2-e740-402d-b2f8-d8ab27224b94-kube-api-access-ccpcb\") pod \"c98b03c2-e740-402d-b2f8-d8ab27224b94\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.669615 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-util\") pod \"c98b03c2-e740-402d-b2f8-d8ab27224b94\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.669756 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-bundle\") pod \"c98b03c2-e740-402d-b2f8-d8ab27224b94\" (UID: \"c98b03c2-e740-402d-b2f8-d8ab27224b94\") " Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.670892 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-bundle" (OuterVolumeSpecName: "bundle") pod "c98b03c2-e740-402d-b2f8-d8ab27224b94" (UID: "c98b03c2-e740-402d-b2f8-d8ab27224b94"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.674488 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98b03c2-e740-402d-b2f8-d8ab27224b94-kube-api-access-ccpcb" (OuterVolumeSpecName: "kube-api-access-ccpcb") pod "c98b03c2-e740-402d-b2f8-d8ab27224b94" (UID: "c98b03c2-e740-402d-b2f8-d8ab27224b94"). InnerVolumeSpecName "kube-api-access-ccpcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.684548 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-util" (OuterVolumeSpecName: "util") pod "c98b03c2-e740-402d-b2f8-d8ab27224b94" (UID: "c98b03c2-e740-402d-b2f8-d8ab27224b94"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.771239 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccpcb\" (UniqueName: \"kubernetes.io/projected/c98b03c2-e740-402d-b2f8-d8ab27224b94-kube-api-access-ccpcb\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.771290 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-util\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:10 crc kubenswrapper[4687]: I1203 17:54:10.771320 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c98b03c2-e740-402d-b2f8-d8ab27224b94-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:54:11 crc kubenswrapper[4687]: I1203 17:54:11.303235 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" event={"ID":"c98b03c2-e740-402d-b2f8-d8ab27224b94","Type":"ContainerDied","Data":"089cd0c1dc9802479f8691bbe449da918b16a2755103f3b507e2eb3d03a0a886"} Dec 03 17:54:11 crc kubenswrapper[4687]: I1203 17:54:11.303275 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="089cd0c1dc9802479f8691bbe449da918b16a2755103f3b507e2eb3d03a0a886" Dec 03 17:54:11 crc kubenswrapper[4687]: I1203 17:54:11.303294 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58" Dec 03 17:54:14 crc kubenswrapper[4687]: I1203 17:54:14.111395 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:54:14 crc kubenswrapper[4687]: I1203 17:54:14.111649 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.643328 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf"] Dec 03 17:54:21 crc kubenswrapper[4687]: E1203 17:54:21.644348 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98b03c2-e740-402d-b2f8-d8ab27224b94" containerName="extract" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.644368 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98b03c2-e740-402d-b2f8-d8ab27224b94" containerName="extract" Dec 03 17:54:21 crc kubenswrapper[4687]: E1203 17:54:21.644393 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98b03c2-e740-402d-b2f8-d8ab27224b94" containerName="pull" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.644402 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98b03c2-e740-402d-b2f8-d8ab27224b94" containerName="pull" Dec 03 17:54:21 crc kubenswrapper[4687]: E1203 17:54:21.644442 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98b03c2-e740-402d-b2f8-d8ab27224b94" 
containerName="util" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.644453 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98b03c2-e740-402d-b2f8-d8ab27224b94" containerName="util" Dec 03 17:54:21 crc kubenswrapper[4687]: E1203 17:54:21.644464 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c55e5e2-5437-468e-9410-605afa2612d9" containerName="console" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.644472 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c55e5e2-5437-468e-9410-605afa2612d9" containerName="console" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.644762 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c55e5e2-5437-468e-9410-605afa2612d9" containerName="console" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.644798 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c98b03c2-e740-402d-b2f8-d8ab27224b94" containerName="extract" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.645508 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.650794 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.651060 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.651212 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.651338 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.651894 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-k698h" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.699496 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf"] Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.803867 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20b383be-ffda-4db5-8914-c3a22cfb94ec-webhook-cert\") pod \"metallb-operator-controller-manager-758fc566f8-ssxcf\" (UID: \"20b383be-ffda-4db5-8914-c3a22cfb94ec\") " pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.803924 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20b383be-ffda-4db5-8914-c3a22cfb94ec-apiservice-cert\") pod \"metallb-operator-controller-manager-758fc566f8-ssxcf\" (UID: 
\"20b383be-ffda-4db5-8914-c3a22cfb94ec\") " pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.803990 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q58jp\" (UniqueName: \"kubernetes.io/projected/20b383be-ffda-4db5-8914-c3a22cfb94ec-kube-api-access-q58jp\") pod \"metallb-operator-controller-manager-758fc566f8-ssxcf\" (UID: \"20b383be-ffda-4db5-8914-c3a22cfb94ec\") " pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.905422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20b383be-ffda-4db5-8914-c3a22cfb94ec-webhook-cert\") pod \"metallb-operator-controller-manager-758fc566f8-ssxcf\" (UID: \"20b383be-ffda-4db5-8914-c3a22cfb94ec\") " pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.905488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20b383be-ffda-4db5-8914-c3a22cfb94ec-apiservice-cert\") pod \"metallb-operator-controller-manager-758fc566f8-ssxcf\" (UID: \"20b383be-ffda-4db5-8914-c3a22cfb94ec\") " pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.905604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q58jp\" (UniqueName: \"kubernetes.io/projected/20b383be-ffda-4db5-8914-c3a22cfb94ec-kube-api-access-q58jp\") pod \"metallb-operator-controller-manager-758fc566f8-ssxcf\" (UID: \"20b383be-ffda-4db5-8914-c3a22cfb94ec\") " pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.924429 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/20b383be-ffda-4db5-8914-c3a22cfb94ec-apiservice-cert\") pod \"metallb-operator-controller-manager-758fc566f8-ssxcf\" (UID: \"20b383be-ffda-4db5-8914-c3a22cfb94ec\") " pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.924429 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/20b383be-ffda-4db5-8914-c3a22cfb94ec-webhook-cert\") pod \"metallb-operator-controller-manager-758fc566f8-ssxcf\" (UID: \"20b383be-ffda-4db5-8914-c3a22cfb94ec\") " pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.930828 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q58jp\" (UniqueName: \"kubernetes.io/projected/20b383be-ffda-4db5-8914-c3a22cfb94ec-kube-api-access-q58jp\") pod \"metallb-operator-controller-manager-758fc566f8-ssxcf\" (UID: \"20b383be-ffda-4db5-8914-c3a22cfb94ec\") " pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:21 crc kubenswrapper[4687]: I1203 17:54:21.970223 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.024266 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx"] Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.025031 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.027525 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.027642 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-kvcrt" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.029205 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.041458 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx"] Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.110744 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjkvf\" (UniqueName: \"kubernetes.io/projected/3c6529b3-3b9c-4329-8ed7-05431ec4a4bf-kube-api-access-qjkvf\") pod \"metallb-operator-webhook-server-6cfb994ff-8gwcx\" (UID: \"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf\") " pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.110797 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c6529b3-3b9c-4329-8ed7-05431ec4a4bf-webhook-cert\") pod \"metallb-operator-webhook-server-6cfb994ff-8gwcx\" (UID: \"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf\") " pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.110885 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/3c6529b3-3b9c-4329-8ed7-05431ec4a4bf-apiservice-cert\") pod \"metallb-operator-webhook-server-6cfb994ff-8gwcx\" (UID: \"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf\") " pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.211951 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c6529b3-3b9c-4329-8ed7-05431ec4a4bf-apiservice-cert\") pod \"metallb-operator-webhook-server-6cfb994ff-8gwcx\" (UID: \"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf\") " pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.212014 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjkvf\" (UniqueName: \"kubernetes.io/projected/3c6529b3-3b9c-4329-8ed7-05431ec4a4bf-kube-api-access-qjkvf\") pod \"metallb-operator-webhook-server-6cfb994ff-8gwcx\" (UID: \"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf\") " pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.212054 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c6529b3-3b9c-4329-8ed7-05431ec4a4bf-webhook-cert\") pod \"metallb-operator-webhook-server-6cfb994ff-8gwcx\" (UID: \"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf\") " pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.216383 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c6529b3-3b9c-4329-8ed7-05431ec4a4bf-apiservice-cert\") pod \"metallb-operator-webhook-server-6cfb994ff-8gwcx\" (UID: \"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf\") " pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc 
kubenswrapper[4687]: I1203 17:54:22.226835 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c6529b3-3b9c-4329-8ed7-05431ec4a4bf-webhook-cert\") pod \"metallb-operator-webhook-server-6cfb994ff-8gwcx\" (UID: \"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf\") " pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.234155 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjkvf\" (UniqueName: \"kubernetes.io/projected/3c6529b3-3b9c-4329-8ed7-05431ec4a4bf-kube-api-access-qjkvf\") pod \"metallb-operator-webhook-server-6cfb994ff-8gwcx\" (UID: \"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf\") " pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.238881 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf"] Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.346533 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.368340 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" event={"ID":"20b383be-ffda-4db5-8914-c3a22cfb94ec","Type":"ContainerStarted","Data":"6e832aa59087ada5fa5e9a24da59f96923483268996ad998b269c02c80f4655e"} Dec 03 17:54:22 crc kubenswrapper[4687]: I1203 17:54:22.585214 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx"] Dec 03 17:54:22 crc kubenswrapper[4687]: W1203 17:54:22.592244 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c6529b3_3b9c_4329_8ed7_05431ec4a4bf.slice/crio-7e4724ada7db26296ce7670e4f8c59e2bcca47f6ce7ababeeeb4309f88c39770 WatchSource:0}: Error finding container 7e4724ada7db26296ce7670e4f8c59e2bcca47f6ce7ababeeeb4309f88c39770: Status 404 returned error can't find the container with id 7e4724ada7db26296ce7670e4f8c59e2bcca47f6ce7ababeeeb4309f88c39770 Dec 03 17:54:23 crc kubenswrapper[4687]: I1203 17:54:23.380921 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" event={"ID":"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf","Type":"ContainerStarted","Data":"7e4724ada7db26296ce7670e4f8c59e2bcca47f6ce7ababeeeb4309f88c39770"} Dec 03 17:54:25 crc kubenswrapper[4687]: I1203 17:54:25.396913 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" event={"ID":"20b383be-ffda-4db5-8914-c3a22cfb94ec","Type":"ContainerStarted","Data":"9916791bc3c92d372cc304dc601fbf68e6f7b130876f5cfe62edc62364332d18"} Dec 03 17:54:25 crc kubenswrapper[4687]: I1203 17:54:25.397277 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:54:25 crc kubenswrapper[4687]: I1203 17:54:25.417570 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" podStartSLOduration=1.72286648 podStartE2EDuration="4.417549371s" podCreationTimestamp="2025-12-03 17:54:21 +0000 UTC" firstStartedPulling="2025-12-03 17:54:22.257785837 +0000 UTC m=+895.148481270" lastFinishedPulling="2025-12-03 17:54:24.952468728 +0000 UTC m=+897.843164161" observedRunningTime="2025-12-03 17:54:25.415741882 +0000 UTC m=+898.306437325" watchObservedRunningTime="2025-12-03 17:54:25.417549371 +0000 UTC m=+898.308244804" Dec 03 17:54:27 crc kubenswrapper[4687]: I1203 17:54:27.422158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" event={"ID":"3c6529b3-3b9c-4329-8ed7-05431ec4a4bf","Type":"ContainerStarted","Data":"e5d743c80dff3deccf5e4682fbe43a7f0e91af6d182cfcf39ae34b06164093c5"} Dec 03 17:54:27 crc kubenswrapper[4687]: I1203 17:54:27.422522 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:27 crc kubenswrapper[4687]: I1203 17:54:27.464002 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" podStartSLOduration=2.087512007 podStartE2EDuration="6.463986198s" podCreationTimestamp="2025-12-03 17:54:21 +0000 UTC" firstStartedPulling="2025-12-03 17:54:22.595681928 +0000 UTC m=+895.486377361" lastFinishedPulling="2025-12-03 17:54:26.972156119 +0000 UTC m=+899.862851552" observedRunningTime="2025-12-03 17:54:27.462034664 +0000 UTC m=+900.352730107" watchObservedRunningTime="2025-12-03 17:54:27.463986198 +0000 UTC m=+900.354681631" Dec 03 17:54:42 crc kubenswrapper[4687]: I1203 17:54:42.354401 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6cfb994ff-8gwcx" Dec 03 17:54:44 crc kubenswrapper[4687]: I1203 17:54:44.111563 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:54:44 crc kubenswrapper[4687]: I1203 17:54:44.111658 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:55:01 crc kubenswrapper[4687]: I1203 17:55:01.973097 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-758fc566f8-ssxcf" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.666626 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-d6gdp"] Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.669379 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.671687 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.671794 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-b869v" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.672425 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l"] Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.673201 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.676549 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.677862 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.680533 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l"] Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.807070 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rzhqb"] Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.808173 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-rzhqb" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.820643 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.821227 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.822013 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rbsjm" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.824435 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.848644 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-metrics\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.848696 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2m6\" (UniqueName: \"kubernetes.io/projected/ba8d9037-40bd-4f5b-bd59-139f36424600-kube-api-access-mh2m6\") pod \"frr-k8s-webhook-server-7fcb986d4-z7q2l\" (UID: \"ba8d9037-40bd-4f5b-bd59-139f36424600\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.848729 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bda58d5c-98aa-4889-bbd8-f7336cc0aade-frr-startup\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.848857 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-reloader\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.848924 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8ww4\" (UniqueName: \"kubernetes.io/projected/bda58d5c-98aa-4889-bbd8-f7336cc0aade-kube-api-access-s8ww4\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.848982 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba8d9037-40bd-4f5b-bd59-139f36424600-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-z7q2l\" (UID: \"ba8d9037-40bd-4f5b-bd59-139f36424600\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.849037 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda58d5c-98aa-4889-bbd8-f7336cc0aade-metrics-certs\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.849103 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-frr-sockets\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.849179 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-frr-conf\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.849724 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-xc95b"] Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.850577 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.854011 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.879421 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-xc95b"] Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.950634 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-metrics-certs\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.950700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8ww4\" (UniqueName: \"kubernetes.io/projected/bda58d5c-98aa-4889-bbd8-f7336cc0aade-kube-api-access-s8ww4\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.950724 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-memberlist\") pod 
\"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.950749 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba8d9037-40bd-4f5b-bd59-139f36424600-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-z7q2l\" (UID: \"ba8d9037-40bd-4f5b-bd59-139f36424600\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.950784 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda58d5c-98aa-4889-bbd8-f7336cc0aade-metrics-certs\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.950809 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fe83569c-2e40-440d-85fc-764d28429dbf-metallb-excludel2\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.950824 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-frr-sockets\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.950844 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8kk\" (UniqueName: \"kubernetes.io/projected/fe83569c-2e40-440d-85fc-764d28429dbf-kube-api-access-bm8kk\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" 
Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.951257 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-frr-conf\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.951369 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-metrics\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.951399 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2m6\" (UniqueName: \"kubernetes.io/projected/ba8d9037-40bd-4f5b-bd59-139f36424600-kube-api-access-mh2m6\") pod \"frr-k8s-webhook-server-7fcb986d4-z7q2l\" (UID: \"ba8d9037-40bd-4f5b-bd59-139f36424600\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.951404 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-frr-sockets\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.951422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bda58d5c-98aa-4889-bbd8-f7336cc0aade-frr-startup\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.951457 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" 
(UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-reloader\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.951654 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-reloader\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.951865 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-metrics\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.952045 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/bda58d5c-98aa-4889-bbd8-f7336cc0aade-frr-conf\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.952635 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/bda58d5c-98aa-4889-bbd8-f7336cc0aade-frr-startup\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.963661 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bda58d5c-98aa-4889-bbd8-f7336cc0aade-metrics-certs\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.963845 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ba8d9037-40bd-4f5b-bd59-139f36424600-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-z7q2l\" (UID: \"ba8d9037-40bd-4f5b-bd59-139f36424600\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.967893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8ww4\" (UniqueName: \"kubernetes.io/projected/bda58d5c-98aa-4889-bbd8-f7336cc0aade-kube-api-access-s8ww4\") pod \"frr-k8s-d6gdp\" (UID: \"bda58d5c-98aa-4889-bbd8-f7336cc0aade\") " pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.968430 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2m6\" (UniqueName: \"kubernetes.io/projected/ba8d9037-40bd-4f5b-bd59-139f36424600-kube-api-access-mh2m6\") pod \"frr-k8s-webhook-server-7fcb986d4-z7q2l\" (UID: \"ba8d9037-40bd-4f5b-bd59-139f36424600\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:02 crc kubenswrapper[4687]: I1203 17:55:02.991591 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.001772 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.052795 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzfkb\" (UniqueName: \"kubernetes.io/projected/bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3-kube-api-access-mzfkb\") pod \"controller-f8648f98b-xc95b\" (UID: \"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3\") " pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.052867 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-metrics-certs\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.052912 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-memberlist\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.052951 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3-metrics-certs\") pod \"controller-f8648f98b-xc95b\" (UID: \"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3\") " pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.052987 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fe83569c-2e40-440d-85fc-764d28429dbf-metallb-excludel2\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 
03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.053012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm8kk\" (UniqueName: \"kubernetes.io/projected/fe83569c-2e40-440d-85fc-764d28429dbf-kube-api-access-bm8kk\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.053063 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3-cert\") pod \"controller-f8648f98b-xc95b\" (UID: \"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3\") " pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: E1203 17:55:03.053903 4687 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 03 17:55:03 crc kubenswrapper[4687]: E1203 17:55:03.054093 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-memberlist podName:fe83569c-2e40-440d-85fc-764d28429dbf nodeName:}" failed. No retries permitted until 2025-12-03 17:55:03.553944378 +0000 UTC m=+936.444639821 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-memberlist") pod "speaker-rzhqb" (UID: "fe83569c-2e40-440d-85fc-764d28429dbf") : secret "metallb-memberlist" not found Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.054736 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fe83569c-2e40-440d-85fc-764d28429dbf-metallb-excludel2\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.058806 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-metrics-certs\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.072873 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm8kk\" (UniqueName: \"kubernetes.io/projected/fe83569c-2e40-440d-85fc-764d28429dbf-kube-api-access-bm8kk\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.165786 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzfkb\" (UniqueName: \"kubernetes.io/projected/bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3-kube-api-access-mzfkb\") pod \"controller-f8648f98b-xc95b\" (UID: \"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3\") " pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.167486 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3-metrics-certs\") pod 
\"controller-f8648f98b-xc95b\" (UID: \"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3\") " pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.167555 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3-cert\") pod \"controller-f8648f98b-xc95b\" (UID: \"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3\") " pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.169663 4687 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.171400 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3-metrics-certs\") pod \"controller-f8648f98b-xc95b\" (UID: \"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3\") " pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.179959 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3-cert\") pod \"controller-f8648f98b-xc95b\" (UID: \"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3\") " pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.200842 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzfkb\" (UniqueName: \"kubernetes.io/projected/bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3-kube-api-access-mzfkb\") pod \"controller-f8648f98b-xc95b\" (UID: \"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3\") " pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.309737 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l"] Dec 03 17:55:03 crc 
kubenswrapper[4687]: W1203 17:55:03.315004 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba8d9037_40bd_4f5b_bd59_139f36424600.slice/crio-9f632a7d911be3eccca854f01d8311408307acd686af8aaa4901226e2fdd9120 WatchSource:0}: Error finding container 9f632a7d911be3eccca854f01d8311408307acd686af8aaa4901226e2fdd9120: Status 404 returned error can't find the container with id 9f632a7d911be3eccca854f01d8311408307acd686af8aaa4901226e2fdd9120 Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.464670 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.572349 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-memberlist\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:03 crc kubenswrapper[4687]: E1203 17:55:03.572602 4687 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 03 17:55:03 crc kubenswrapper[4687]: E1203 17:55:03.572787 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-memberlist podName:fe83569c-2e40-440d-85fc-764d28429dbf nodeName:}" failed. No retries permitted until 2025-12-03 17:55:04.57276683 +0000 UTC m=+937.463462263 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-memberlist") pod "speaker-rzhqb" (UID: "fe83569c-2e40-440d-85fc-764d28429dbf") : secret "metallb-memberlist" not found Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.620339 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" event={"ID":"ba8d9037-40bd-4f5b-bd59-139f36424600","Type":"ContainerStarted","Data":"9f632a7d911be3eccca854f01d8311408307acd686af8aaa4901226e2fdd9120"} Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.621006 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerStarted","Data":"6bf3c2246443d8aaca75af3e9d8922a505f0d7ba1f625d0d533e9ae4fd589c22"} Dec 03 17:55:03 crc kubenswrapper[4687]: I1203 17:55:03.665700 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-xc95b"] Dec 03 17:55:03 crc kubenswrapper[4687]: W1203 17:55:03.675183 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff2bdf6_ec54_4e9e_8d82_d5ed87643dd3.slice/crio-9af95e0bae4bad40be76eb2aceca4e6916aff77369372b61083686937ade3f0f WatchSource:0}: Error finding container 9af95e0bae4bad40be76eb2aceca4e6916aff77369372b61083686937ade3f0f: Status 404 returned error can't find the container with id 9af95e0bae4bad40be76eb2aceca4e6916aff77369372b61083686937ade3f0f Dec 03 17:55:04 crc kubenswrapper[4687]: I1203 17:55:04.586684 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-memberlist\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:04 crc kubenswrapper[4687]: I1203 
17:55:04.593636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fe83569c-2e40-440d-85fc-764d28429dbf-memberlist\") pod \"speaker-rzhqb\" (UID: \"fe83569c-2e40-440d-85fc-764d28429dbf\") " pod="metallb-system/speaker-rzhqb" Dec 03 17:55:04 crc kubenswrapper[4687]: I1203 17:55:04.621155 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rzhqb" Dec 03 17:55:04 crc kubenswrapper[4687]: I1203 17:55:04.629598 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-xc95b" event={"ID":"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3","Type":"ContainerStarted","Data":"3f56e7e018608202409f89d3902be84388548c9307d51cee8dace57790573993"} Dec 03 17:55:04 crc kubenswrapper[4687]: I1203 17:55:04.629644 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-xc95b" event={"ID":"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3","Type":"ContainerStarted","Data":"2e96cf7f7a45e1e6fa05cc790835cc8891fc64357b25fb59a8174c15417892ea"} Dec 03 17:55:04 crc kubenswrapper[4687]: I1203 17:55:04.629657 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-xc95b" event={"ID":"bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3","Type":"ContainerStarted","Data":"9af95e0bae4bad40be76eb2aceca4e6916aff77369372b61083686937ade3f0f"} Dec 03 17:55:04 crc kubenswrapper[4687]: I1203 17:55:04.629753 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:04 crc kubenswrapper[4687]: W1203 17:55:04.647246 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe83569c_2e40_440d_85fc_764d28429dbf.slice/crio-cc55e6a1734a43254ff57045d0845a56f2eff0fc71e736a9e61fcbaa0821a69f WatchSource:0}: Error finding container 
cc55e6a1734a43254ff57045d0845a56f2eff0fc71e736a9e61fcbaa0821a69f: Status 404 returned error can't find the container with id cc55e6a1734a43254ff57045d0845a56f2eff0fc71e736a9e61fcbaa0821a69f Dec 03 17:55:04 crc kubenswrapper[4687]: I1203 17:55:04.657895 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-xc95b" podStartSLOduration=2.657873674 podStartE2EDuration="2.657873674s" podCreationTimestamp="2025-12-03 17:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:55:04.650533863 +0000 UTC m=+937.541229316" watchObservedRunningTime="2025-12-03 17:55:04.657873674 +0000 UTC m=+937.548569107" Dec 03 17:55:05 crc kubenswrapper[4687]: I1203 17:55:05.638408 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rzhqb" event={"ID":"fe83569c-2e40-440d-85fc-764d28429dbf","Type":"ContainerStarted","Data":"f7f3ff9b90a77404ad1901694be7a5e9837efd1dae2b62c6a2b43fe9eb757fbc"} Dec 03 17:55:05 crc kubenswrapper[4687]: I1203 17:55:05.638841 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rzhqb" event={"ID":"fe83569c-2e40-440d-85fc-764d28429dbf","Type":"ContainerStarted","Data":"6ff0c1f4d0639f0ec7e5f6838a36e5299c50b00af1a738a0ae6ecf1ed1ad5ef4"} Dec 03 17:55:05 crc kubenswrapper[4687]: I1203 17:55:05.638860 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rzhqb" event={"ID":"fe83569c-2e40-440d-85fc-764d28429dbf","Type":"ContainerStarted","Data":"cc55e6a1734a43254ff57045d0845a56f2eff0fc71e736a9e61fcbaa0821a69f"} Dec 03 17:55:05 crc kubenswrapper[4687]: I1203 17:55:05.640945 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rzhqb" Dec 03 17:55:05 crc kubenswrapper[4687]: I1203 17:55:05.661717 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/speaker-rzhqb" podStartSLOduration=3.661698298 podStartE2EDuration="3.661698298s" podCreationTimestamp="2025-12-03 17:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:55:05.660973878 +0000 UTC m=+938.551669311" watchObservedRunningTime="2025-12-03 17:55:05.661698298 +0000 UTC m=+938.552393721" Dec 03 17:55:10 crc kubenswrapper[4687]: I1203 17:55:10.668023 4687 generic.go:334] "Generic (PLEG): container finished" podID="bda58d5c-98aa-4889-bbd8-f7336cc0aade" containerID="baa4d0301e750a984bb42c84d9938d3f411ce8d70cf90bf49b7a4b277801d6b8" exitCode=0 Dec 03 17:55:10 crc kubenswrapper[4687]: I1203 17:55:10.668151 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerDied","Data":"baa4d0301e750a984bb42c84d9938d3f411ce8d70cf90bf49b7a4b277801d6b8"} Dec 03 17:55:10 crc kubenswrapper[4687]: I1203 17:55:10.670899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" event={"ID":"ba8d9037-40bd-4f5b-bd59-139f36424600","Type":"ContainerStarted","Data":"4b88eb5c8c96b9659b1d9a891eb12d1cfa96cfacdb8967df67a2f38ff2c7c9af"} Dec 03 17:55:10 crc kubenswrapper[4687]: I1203 17:55:10.671191 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:10 crc kubenswrapper[4687]: I1203 17:55:10.729156 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" podStartSLOduration=2.120009891 podStartE2EDuration="8.729092835s" podCreationTimestamp="2025-12-03 17:55:02 +0000 UTC" firstStartedPulling="2025-12-03 17:55:03.317639692 +0000 UTC m=+936.208335125" lastFinishedPulling="2025-12-03 17:55:09.926722636 +0000 UTC m=+942.817418069" 
observedRunningTime="2025-12-03 17:55:10.72889812 +0000 UTC m=+943.619593593" watchObservedRunningTime="2025-12-03 17:55:10.729092835 +0000 UTC m=+943.619788308" Dec 03 17:55:11 crc kubenswrapper[4687]: I1203 17:55:11.681049 4687 generic.go:334] "Generic (PLEG): container finished" podID="bda58d5c-98aa-4889-bbd8-f7336cc0aade" containerID="33543d887af9aa70c0c0f51f7285150d1cb3f1220f3c5d01ed66acd76f2d1ef1" exitCode=0 Dec 03 17:55:11 crc kubenswrapper[4687]: I1203 17:55:11.681147 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerDied","Data":"33543d887af9aa70c0c0f51f7285150d1cb3f1220f3c5d01ed66acd76f2d1ef1"} Dec 03 17:55:12 crc kubenswrapper[4687]: I1203 17:55:12.690010 4687 generic.go:334] "Generic (PLEG): container finished" podID="bda58d5c-98aa-4889-bbd8-f7336cc0aade" containerID="83af0136a7b9869149f5827a5c424613b572f7115fa18e3bd0caf6a5016c8c7e" exitCode=0 Dec 03 17:55:12 crc kubenswrapper[4687]: I1203 17:55:12.690056 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerDied","Data":"83af0136a7b9869149f5827a5c424613b572f7115fa18e3bd0caf6a5016c8c7e"} Dec 03 17:55:13 crc kubenswrapper[4687]: I1203 17:55:13.468979 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-xc95b" Dec 03 17:55:13 crc kubenswrapper[4687]: I1203 17:55:13.701199 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerStarted","Data":"4f2570567e81ccb9622f579cf7ee430b9d71eb27ab3b1a07d5c9c16d7e112307"} Dec 03 17:55:13 crc kubenswrapper[4687]: I1203 17:55:13.701252 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" 
event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerStarted","Data":"6235a5ffcf56b8cd2c0241fb40d611459c5497897ca6c2ccc07bdfa56ee1ffc5"} Dec 03 17:55:13 crc kubenswrapper[4687]: I1203 17:55:13.701265 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerStarted","Data":"d353c76538c263eb175df5b92c598026374c747f4ff084b073e96f13192b90a9"} Dec 03 17:55:13 crc kubenswrapper[4687]: I1203 17:55:13.701278 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerStarted","Data":"216104853b18119b5d9f127cc468fa3fa7e06af55c147a0c4351648ac180792d"} Dec 03 17:55:13 crc kubenswrapper[4687]: I1203 17:55:13.701287 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerStarted","Data":"e25c312324fc21196bab5493c5d2e05551314c945bfd4b086b0fe9f53034ee40"} Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.111791 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.111864 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.111917 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.112601 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15f3686b8b444d7ca51bf051ca58c72afb51a20e88ac7611ce3fcbdca0c8e6a0"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.112661 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://15f3686b8b444d7ca51bf051ca58c72afb51a20e88ac7611ce3fcbdca0c8e6a0" gracePeriod=600 Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.625887 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rzhqb" Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.712700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d6gdp" event={"ID":"bda58d5c-98aa-4889-bbd8-f7336cc0aade","Type":"ContainerStarted","Data":"518129b3df58c5f7307f31855560031b6f58e862fe8f17aa1cfa5b4a3a82676d"} Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.712869 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.715264 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="15f3686b8b444d7ca51bf051ca58c72afb51a20e88ac7611ce3fcbdca0c8e6a0" exitCode=0 Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.715308 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" 
event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"15f3686b8b444d7ca51bf051ca58c72afb51a20e88ac7611ce3fcbdca0c8e6a0"} Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.715352 4687 scope.go:117] "RemoveContainer" containerID="343b0a9edf6bdcba6ed9889eac0435890e04eb43294c88a95b2f241b2ffd4273" Dec 03 17:55:14 crc kubenswrapper[4687]: I1203 17:55:14.754090 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-d6gdp" podStartSLOduration=6.002069034 podStartE2EDuration="12.754068339s" podCreationTimestamp="2025-12-03 17:55:02 +0000 UTC" firstStartedPulling="2025-12-03 17:55:03.211814379 +0000 UTC m=+936.102509812" lastFinishedPulling="2025-12-03 17:55:09.963813684 +0000 UTC m=+942.854509117" observedRunningTime="2025-12-03 17:55:14.750274985 +0000 UTC m=+947.640970438" watchObservedRunningTime="2025-12-03 17:55:14.754068339 +0000 UTC m=+947.644763772" Dec 03 17:55:15 crc kubenswrapper[4687]: I1203 17:55:15.724176 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"5b5046e7c2fc69da47de778c08a447a041ab0f6ce5bedb54a043d37f682e5a7a"} Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.634231 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-t7l2r"] Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.635463 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-t7l2r" Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.638364 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2lpl9" Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.638432 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.638559 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.649753 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t7l2r"] Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.771525 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8x8v\" (UniqueName: \"kubernetes.io/projected/230c7cd1-d704-4c59-b682-083781844cd5-kube-api-access-c8x8v\") pod \"openstack-operator-index-t7l2r\" (UID: \"230c7cd1-d704-4c59-b682-083781844cd5\") " pod="openstack-operators/openstack-operator-index-t7l2r" Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.873430 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8x8v\" (UniqueName: \"kubernetes.io/projected/230c7cd1-d704-4c59-b682-083781844cd5-kube-api-access-c8x8v\") pod \"openstack-operator-index-t7l2r\" (UID: \"230c7cd1-d704-4c59-b682-083781844cd5\") " pod="openstack-operators/openstack-operator-index-t7l2r" Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.890310 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8x8v\" (UniqueName: \"kubernetes.io/projected/230c7cd1-d704-4c59-b682-083781844cd5-kube-api-access-c8x8v\") pod \"openstack-operator-index-t7l2r\" (UID: 
\"230c7cd1-d704-4c59-b682-083781844cd5\") " pod="openstack-operators/openstack-operator-index-t7l2r" Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.984877 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t7l2r" Dec 03 17:55:17 crc kubenswrapper[4687]: I1203 17:55:17.992178 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:18 crc kubenswrapper[4687]: I1203 17:55:18.064081 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:18 crc kubenswrapper[4687]: I1203 17:55:18.182425 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t7l2r"] Dec 03 17:55:18 crc kubenswrapper[4687]: W1203 17:55:18.189139 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod230c7cd1_d704_4c59_b682_083781844cd5.slice/crio-f59a74a162e1005a4df0fd773325b52d7cb4aaf38e241f498eb2866b357eb245 WatchSource:0}: Error finding container f59a74a162e1005a4df0fd773325b52d7cb4aaf38e241f498eb2866b357eb245: Status 404 returned error can't find the container with id f59a74a162e1005a4df0fd773325b52d7cb4aaf38e241f498eb2866b357eb245 Dec 03 17:55:18 crc kubenswrapper[4687]: I1203 17:55:18.739830 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t7l2r" event={"ID":"230c7cd1-d704-4c59-b682-083781844cd5","Type":"ContainerStarted","Data":"f59a74a162e1005a4df0fd773325b52d7cb4aaf38e241f498eb2866b357eb245"} Dec 03 17:55:20 crc kubenswrapper[4687]: I1203 17:55:20.754698 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t7l2r" 
event={"ID":"230c7cd1-d704-4c59-b682-083781844cd5","Type":"ContainerStarted","Data":"a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620"} Dec 03 17:55:20 crc kubenswrapper[4687]: I1203 17:55:20.781711 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-t7l2r" podStartSLOduration=1.634826779 podStartE2EDuration="3.781682526s" podCreationTimestamp="2025-12-03 17:55:17 +0000 UTC" firstStartedPulling="2025-12-03 17:55:18.191919709 +0000 UTC m=+951.082615142" lastFinishedPulling="2025-12-03 17:55:20.338775446 +0000 UTC m=+953.229470889" observedRunningTime="2025-12-03 17:55:20.778845358 +0000 UTC m=+953.669540821" watchObservedRunningTime="2025-12-03 17:55:20.781682526 +0000 UTC m=+953.672377999" Dec 03 17:55:21 crc kubenswrapper[4687]: I1203 17:55:21.016322 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-t7l2r"] Dec 03 17:55:21 crc kubenswrapper[4687]: I1203 17:55:21.632145 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5r6vj"] Dec 03 17:55:21 crc kubenswrapper[4687]: I1203 17:55:21.633355 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5r6vj" Dec 03 17:55:21 crc kubenswrapper[4687]: I1203 17:55:21.654664 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5r6vj"] Dec 03 17:55:21 crc kubenswrapper[4687]: I1203 17:55:21.726453 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhdlf\" (UniqueName: \"kubernetes.io/projected/8e1a26a4-e1d4-4d8f-a452-a86a688788f3-kube-api-access-nhdlf\") pod \"openstack-operator-index-5r6vj\" (UID: \"8e1a26a4-e1d4-4d8f-a452-a86a688788f3\") " pod="openstack-operators/openstack-operator-index-5r6vj" Dec 03 17:55:21 crc kubenswrapper[4687]: I1203 17:55:21.828562 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhdlf\" (UniqueName: \"kubernetes.io/projected/8e1a26a4-e1d4-4d8f-a452-a86a688788f3-kube-api-access-nhdlf\") pod \"openstack-operator-index-5r6vj\" (UID: \"8e1a26a4-e1d4-4d8f-a452-a86a688788f3\") " pod="openstack-operators/openstack-operator-index-5r6vj" Dec 03 17:55:21 crc kubenswrapper[4687]: I1203 17:55:21.865928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhdlf\" (UniqueName: \"kubernetes.io/projected/8e1a26a4-e1d4-4d8f-a452-a86a688788f3-kube-api-access-nhdlf\") pod \"openstack-operator-index-5r6vj\" (UID: \"8e1a26a4-e1d4-4d8f-a452-a86a688788f3\") " pod="openstack-operators/openstack-operator-index-5r6vj" Dec 03 17:55:21 crc kubenswrapper[4687]: I1203 17:55:21.955420 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5r6vj" Dec 03 17:55:22 crc kubenswrapper[4687]: I1203 17:55:22.397206 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5r6vj"] Dec 03 17:55:22 crc kubenswrapper[4687]: W1203 17:55:22.411486 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e1a26a4_e1d4_4d8f_a452_a86a688788f3.slice/crio-96ccc6dce5b6370922fb717f2e676532e37dd226f59a71c39eb89006e6dee702 WatchSource:0}: Error finding container 96ccc6dce5b6370922fb717f2e676532e37dd226f59a71c39eb89006e6dee702: Status 404 returned error can't find the container with id 96ccc6dce5b6370922fb717f2e676532e37dd226f59a71c39eb89006e6dee702 Dec 03 17:55:22 crc kubenswrapper[4687]: I1203 17:55:22.772345 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5r6vj" event={"ID":"8e1a26a4-e1d4-4d8f-a452-a86a688788f3","Type":"ContainerStarted","Data":"f2c7b125c65eb598d83e4dd09fdba85714ad91037d16d4417b57062f56eaf851"} Dec 03 17:55:22 crc kubenswrapper[4687]: I1203 17:55:22.772898 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5r6vj" event={"ID":"8e1a26a4-e1d4-4d8f-a452-a86a688788f3","Type":"ContainerStarted","Data":"96ccc6dce5b6370922fb717f2e676532e37dd226f59a71c39eb89006e6dee702"} Dec 03 17:55:22 crc kubenswrapper[4687]: I1203 17:55:22.772460 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-t7l2r" podUID="230c7cd1-d704-4c59-b682-083781844cd5" containerName="registry-server" containerID="cri-o://a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620" gracePeriod=2 Dec 03 17:55:22 crc kubenswrapper[4687]: I1203 17:55:22.808233 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-index-5r6vj" podStartSLOduration=1.7319899410000001 podStartE2EDuration="1.808201622s" podCreationTimestamp="2025-12-03 17:55:21 +0000 UTC" firstStartedPulling="2025-12-03 17:55:22.416068996 +0000 UTC m=+955.306764469" lastFinishedPulling="2025-12-03 17:55:22.492280707 +0000 UTC m=+955.382976150" observedRunningTime="2025-12-03 17:55:22.799105913 +0000 UTC m=+955.689801426" watchObservedRunningTime="2025-12-03 17:55:22.808201622 +0000 UTC m=+955.698897105" Dec 03 17:55:22 crc kubenswrapper[4687]: I1203 17:55:22.997435 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-d6gdp" Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.005666 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-z7q2l" Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.147825 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t7l2r" Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.253581 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8x8v\" (UniqueName: \"kubernetes.io/projected/230c7cd1-d704-4c59-b682-083781844cd5-kube-api-access-c8x8v\") pod \"230c7cd1-d704-4c59-b682-083781844cd5\" (UID: \"230c7cd1-d704-4c59-b682-083781844cd5\") " Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.258418 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230c7cd1-d704-4c59-b682-083781844cd5-kube-api-access-c8x8v" (OuterVolumeSpecName: "kube-api-access-c8x8v") pod "230c7cd1-d704-4c59-b682-083781844cd5" (UID: "230c7cd1-d704-4c59-b682-083781844cd5"). InnerVolumeSpecName "kube-api-access-c8x8v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.355268 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8x8v\" (UniqueName: \"kubernetes.io/projected/230c7cd1-d704-4c59-b682-083781844cd5-kube-api-access-c8x8v\") on node \"crc\" DevicePath \"\"" Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.778639 4687 generic.go:334] "Generic (PLEG): container finished" podID="230c7cd1-d704-4c59-b682-083781844cd5" containerID="a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620" exitCode=0 Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.778710 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t7l2r" Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.778697 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t7l2r" event={"ID":"230c7cd1-d704-4c59-b682-083781844cd5","Type":"ContainerDied","Data":"a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620"} Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.778770 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t7l2r" event={"ID":"230c7cd1-d704-4c59-b682-083781844cd5","Type":"ContainerDied","Data":"f59a74a162e1005a4df0fd773325b52d7cb4aaf38e241f498eb2866b357eb245"} Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.778803 4687 scope.go:117] "RemoveContainer" containerID="a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620" Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.797597 4687 scope.go:117] "RemoveContainer" containerID="a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620" Dec 03 17:55:23 crc kubenswrapper[4687]: E1203 17:55:23.798763 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620\": container with ID starting with a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620 not found: ID does not exist" containerID="a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620" Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.798813 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620"} err="failed to get container status \"a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620\": rpc error: code = NotFound desc = could not find container \"a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620\": container with ID starting with a4557d4505341e74bc5e26602e0e886db017b6a5686fad84bb34b1b7968fd620 not found: ID does not exist" Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.804021 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-t7l2r"] Dec 03 17:55:23 crc kubenswrapper[4687]: I1203 17:55:23.812436 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-t7l2r"] Dec 03 17:55:25 crc kubenswrapper[4687]: I1203 17:55:25.416801 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230c7cd1-d704-4c59-b682-083781844cd5" path="/var/lib/kubelet/pods/230c7cd1-d704-4c59-b682-083781844cd5/volumes" Dec 03 17:55:31 crc kubenswrapper[4687]: I1203 17:55:31.956663 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5r6vj" Dec 03 17:55:31 crc kubenswrapper[4687]: I1203 17:55:31.957106 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5r6vj" Dec 03 17:55:32 crc kubenswrapper[4687]: I1203 17:55:32.003020 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack-operators/openstack-operator-index-5r6vj" Dec 03 17:55:32 crc kubenswrapper[4687]: I1203 17:55:32.876438 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5r6vj" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.473808 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5"] Dec 03 17:55:37 crc kubenswrapper[4687]: E1203 17:55:37.474335 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="230c7cd1-d704-4c59-b682-083781844cd5" containerName="registry-server" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.474347 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="230c7cd1-d704-4c59-b682-083781844cd5" containerName="registry-server" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.474450 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="230c7cd1-d704-4c59-b682-083781844cd5" containerName="registry-server" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.475209 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.478586 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9p9cm" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.488580 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5"] Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.657455 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-bundle\") pod \"c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.657583 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqkw\" (UniqueName: \"kubernetes.io/projected/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-kube-api-access-8bqkw\") pod \"c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.657633 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-util\") pod \"c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 
17:55:37.760060 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bqkw\" (UniqueName: \"kubernetes.io/projected/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-kube-api-access-8bqkw\") pod \"c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.760157 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-util\") pod \"c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.760208 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-bundle\") pod \"c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.760844 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-bundle\") pod \"c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.761362 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-util\") pod \"c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:37 crc kubenswrapper[4687]: I1203 17:55:37.798839 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bqkw\" (UniqueName: \"kubernetes.io/projected/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-kube-api-access-8bqkw\") pod \"c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:38 crc kubenswrapper[4687]: I1203 17:55:38.092078 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:38 crc kubenswrapper[4687]: I1203 17:55:38.560013 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5"] Dec 03 17:55:38 crc kubenswrapper[4687]: I1203 17:55:38.890692 4687 generic.go:334] "Generic (PLEG): container finished" podID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerID="b00cc8be7546b0021fb4b8e9035bf52058eaac6fe8eb6f0cef5f548a0508339f" exitCode=0 Dec 03 17:55:38 crc kubenswrapper[4687]: I1203 17:55:38.890729 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" event={"ID":"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6","Type":"ContainerDied","Data":"b00cc8be7546b0021fb4b8e9035bf52058eaac6fe8eb6f0cef5f548a0508339f"} Dec 03 17:55:38 crc kubenswrapper[4687]: I1203 17:55:38.890752 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" event={"ID":"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6","Type":"ContainerStarted","Data":"2641d76c0b7d72bf88a8fbf2a4f507feecb9e5d8fa31314842a2345f95643cee"} Dec 03 17:55:39 crc kubenswrapper[4687]: I1203 17:55:39.899313 4687 generic.go:334] "Generic (PLEG): container finished" podID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerID="299d2f69099524a1cd5ed83cc92f83ebb8f2b13d54d8b182852f21bd45e26fba" exitCode=0 Dec 03 17:55:39 crc kubenswrapper[4687]: I1203 17:55:39.899417 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" event={"ID":"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6","Type":"ContainerDied","Data":"299d2f69099524a1cd5ed83cc92f83ebb8f2b13d54d8b182852f21bd45e26fba"} Dec 03 17:55:40 crc kubenswrapper[4687]: I1203 17:55:40.915248 4687 generic.go:334] "Generic (PLEG): container finished" podID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerID="3e1b4bb21e86cd7e3954b2183fd1abec283178c09ef942881d3423a835a930ed" exitCode=0 Dec 03 17:55:40 crc kubenswrapper[4687]: I1203 17:55:40.915486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" event={"ID":"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6","Type":"ContainerDied","Data":"3e1b4bb21e86cd7e3954b2183fd1abec283178c09ef942881d3423a835a930ed"} Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.169343 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qkrsz"] Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.170521 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.208401 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-utilities\") pod \"redhat-marketplace-qkrsz\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.208451 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-catalog-content\") pod \"redhat-marketplace-qkrsz\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.208482 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvlk\" (UniqueName: \"kubernetes.io/projected/5801f3f7-5f56-4130-b6dc-093a46663f85-kube-api-access-vlvlk\") pod \"redhat-marketplace-qkrsz\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.242074 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkrsz"] Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.310003 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-utilities\") pod \"redhat-marketplace-qkrsz\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.310049 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-catalog-content\") pod \"redhat-marketplace-qkrsz\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.310080 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvlk\" (UniqueName: \"kubernetes.io/projected/5801f3f7-5f56-4130-b6dc-093a46663f85-kube-api-access-vlvlk\") pod \"redhat-marketplace-qkrsz\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.310812 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-utilities\") pod \"redhat-marketplace-qkrsz\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.311100 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-catalog-content\") pod \"redhat-marketplace-qkrsz\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.328642 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvlk\" (UniqueName: \"kubernetes.io/projected/5801f3f7-5f56-4130-b6dc-093a46663f85-kube-api-access-vlvlk\") pod \"redhat-marketplace-qkrsz\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.486882 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.906641 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkrsz"] Dec 03 17:55:41 crc kubenswrapper[4687]: I1203 17:55:41.925257 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkrsz" event={"ID":"5801f3f7-5f56-4130-b6dc-093a46663f85","Type":"ContainerStarted","Data":"b10b05a4a572df75f73061d81655a965700b6a25693f1c8e8c2193f77ab76c2f"} Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.169917 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.323077 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-util\") pod \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.323188 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-bundle\") pod \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.323279 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bqkw\" (UniqueName: \"kubernetes.io/projected/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-kube-api-access-8bqkw\") pod \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\" (UID: \"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6\") " Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.332873 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-bundle" (OuterVolumeSpecName: "bundle") pod "af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" (UID: "af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.334535 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-kube-api-access-8bqkw" (OuterVolumeSpecName: "kube-api-access-8bqkw") pod "af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" (UID: "af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6"). InnerVolumeSpecName "kube-api-access-8bqkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.345664 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-util" (OuterVolumeSpecName: "util") pod "af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" (UID: "af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.425238 4687 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-util\") on node \"crc\" DevicePath \"\"" Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.425266 4687 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.425277 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bqkw\" (UniqueName: \"kubernetes.io/projected/af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6-kube-api-access-8bqkw\") on node \"crc\" DevicePath \"\"" Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.933438 4687 generic.go:334] "Generic (PLEG): container finished" podID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerID="f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385" exitCode=0 Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.933494 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkrsz" event={"ID":"5801f3f7-5f56-4130-b6dc-093a46663f85","Type":"ContainerDied","Data":"f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385"} Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.937930 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" event={"ID":"af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6","Type":"ContainerDied","Data":"2641d76c0b7d72bf88a8fbf2a4f507feecb9e5d8fa31314842a2345f95643cee"} Dec 03 17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.937961 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2641d76c0b7d72bf88a8fbf2a4f507feecb9e5d8fa31314842a2345f95643cee" Dec 03 
17:55:42 crc kubenswrapper[4687]: I1203 17:55:42.938009 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5" Dec 03 17:55:43 crc kubenswrapper[4687]: I1203 17:55:43.946958 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkrsz" event={"ID":"5801f3f7-5f56-4130-b6dc-093a46663f85","Type":"ContainerStarted","Data":"c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061"} Dec 03 17:55:44 crc kubenswrapper[4687]: I1203 17:55:44.936521 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp"] Dec 03 17:55:44 crc kubenswrapper[4687]: E1203 17:55:44.936752 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerName="extract" Dec 03 17:55:44 crc kubenswrapper[4687]: I1203 17:55:44.936768 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerName="extract" Dec 03 17:55:44 crc kubenswrapper[4687]: E1203 17:55:44.936779 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerName="util" Dec 03 17:55:44 crc kubenswrapper[4687]: I1203 17:55:44.936786 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerName="util" Dec 03 17:55:44 crc kubenswrapper[4687]: E1203 17:55:44.936796 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerName="pull" Dec 03 17:55:44 crc kubenswrapper[4687]: I1203 17:55:44.936803 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerName="pull" Dec 03 17:55:44 crc kubenswrapper[4687]: I1203 17:55:44.936899 4687 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6" containerName="extract" Dec 03 17:55:44 crc kubenswrapper[4687]: I1203 17:55:44.937342 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" Dec 03 17:55:44 crc kubenswrapper[4687]: I1203 17:55:44.940711 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-bbfln" Dec 03 17:55:44 crc kubenswrapper[4687]: I1203 17:55:44.954189 4687 generic.go:334] "Generic (PLEG): container finished" podID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerID="c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061" exitCode=0 Dec 03 17:55:44 crc kubenswrapper[4687]: I1203 17:55:44.954241 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkrsz" event={"ID":"5801f3f7-5f56-4130-b6dc-093a46663f85","Type":"ContainerDied","Data":"c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061"} Dec 03 17:55:45 crc kubenswrapper[4687]: I1203 17:55:45.053746 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp"] Dec 03 17:55:45 crc kubenswrapper[4687]: I1203 17:55:45.059614 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fhqh\" (UniqueName: \"kubernetes.io/projected/c70399a2-304f-40f7-9f8e-b566d290ede2-kube-api-access-5fhqh\") pod \"openstack-operator-controller-operator-586db6c45c-hj8pp\" (UID: \"c70399a2-304f-40f7-9f8e-b566d290ede2\") " pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" Dec 03 17:55:45 crc kubenswrapper[4687]: I1203 17:55:45.161449 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fhqh\" (UniqueName: 
\"kubernetes.io/projected/c70399a2-304f-40f7-9f8e-b566d290ede2-kube-api-access-5fhqh\") pod \"openstack-operator-controller-operator-586db6c45c-hj8pp\" (UID: \"c70399a2-304f-40f7-9f8e-b566d290ede2\") " pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" Dec 03 17:55:45 crc kubenswrapper[4687]: I1203 17:55:45.186231 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fhqh\" (UniqueName: \"kubernetes.io/projected/c70399a2-304f-40f7-9f8e-b566d290ede2-kube-api-access-5fhqh\") pod \"openstack-operator-controller-operator-586db6c45c-hj8pp\" (UID: \"c70399a2-304f-40f7-9f8e-b566d290ede2\") " pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" Dec 03 17:55:45 crc kubenswrapper[4687]: I1203 17:55:45.263206 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" Dec 03 17:55:45 crc kubenswrapper[4687]: I1203 17:55:45.492805 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp"] Dec 03 17:55:45 crc kubenswrapper[4687]: I1203 17:55:45.965039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkrsz" event={"ID":"5801f3f7-5f56-4130-b6dc-093a46663f85","Type":"ContainerStarted","Data":"fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8"} Dec 03 17:55:45 crc kubenswrapper[4687]: I1203 17:55:45.967016 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" event={"ID":"c70399a2-304f-40f7-9f8e-b566d290ede2","Type":"ContainerStarted","Data":"94aefed5ac97eec70aaad61e9f8ef5273b0c772a5609b2f22711cab95a4b911f"} Dec 03 17:55:45 crc kubenswrapper[4687]: I1203 17:55:45.988776 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-qkrsz" podStartSLOduration=2.389231792 podStartE2EDuration="4.988750362s" podCreationTimestamp="2025-12-03 17:55:41 +0000 UTC" firstStartedPulling="2025-12-03 17:55:42.935523401 +0000 UTC m=+975.826218834" lastFinishedPulling="2025-12-03 17:55:45.535041971 +0000 UTC m=+978.425737404" observedRunningTime="2025-12-03 17:55:45.983262334 +0000 UTC m=+978.873957777" watchObservedRunningTime="2025-12-03 17:55:45.988750362 +0000 UTC m=+978.879445795" Dec 03 17:55:49 crc kubenswrapper[4687]: I1203 17:55:49.995415 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" event={"ID":"c70399a2-304f-40f7-9f8e-b566d290ede2","Type":"ContainerStarted","Data":"86e0b894d8f5ee2c79ed805ec4ce642574c17476a12033eba5cceb3ef601adba"} Dec 03 17:55:49 crc kubenswrapper[4687]: I1203 17:55:49.996779 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" Dec 03 17:55:50 crc kubenswrapper[4687]: I1203 17:55:50.039059 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" podStartSLOduration=2.23324669 podStartE2EDuration="6.039040789s" podCreationTimestamp="2025-12-03 17:55:44 +0000 UTC" firstStartedPulling="2025-12-03 17:55:45.501409792 +0000 UTC m=+978.392105225" lastFinishedPulling="2025-12-03 17:55:49.307203891 +0000 UTC m=+982.197899324" observedRunningTime="2025-12-03 17:55:50.035495503 +0000 UTC m=+982.926190946" watchObservedRunningTime="2025-12-03 17:55:50.039040789 +0000 UTC m=+982.929736232" Dec 03 17:55:51 crc kubenswrapper[4687]: I1203 17:55:51.487582 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:51 crc kubenswrapper[4687]: I1203 17:55:51.488001 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:51 crc kubenswrapper[4687]: I1203 17:55:51.528926 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:52 crc kubenswrapper[4687]: I1203 17:55:52.042238 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:53 crc kubenswrapper[4687]: I1203 17:55:53.963277 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4qs67"] Dec 03 17:55:53 crc kubenswrapper[4687]: I1203 17:55:53.964704 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.006686 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qs67"] Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.097570 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr7gn\" (UniqueName: \"kubernetes.io/projected/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-kube-api-access-mr7gn\") pod \"certified-operators-4qs67\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.097626 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-utilities\") pod \"certified-operators-4qs67\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.097683 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-catalog-content\") pod \"certified-operators-4qs67\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.198606 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr7gn\" (UniqueName: \"kubernetes.io/projected/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-kube-api-access-mr7gn\") pod \"certified-operators-4qs67\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.198934 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-utilities\") pod \"certified-operators-4qs67\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.199223 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-catalog-content\") pod \"certified-operators-4qs67\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.199785 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-catalog-content\") pod \"certified-operators-4qs67\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.199909 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-utilities\") pod \"certified-operators-4qs67\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.219435 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr7gn\" (UniqueName: \"kubernetes.io/projected/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-kube-api-access-mr7gn\") pod \"certified-operators-4qs67\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.300745 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.642265 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qs67"] Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.966284 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkrsz"] Dec 03 17:55:54 crc kubenswrapper[4687]: I1203 17:55:54.966547 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qkrsz" podUID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerName="registry-server" containerID="cri-o://fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8" gracePeriod=2 Dec 03 17:55:55 crc kubenswrapper[4687]: I1203 17:55:55.037883 4687 generic.go:334] "Generic (PLEG): container finished" podID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerID="7ef169b7e27d9288a09ac2d4a26f60fccf7bafd82db5164579518b022750cda6" exitCode=0 Dec 03 17:55:55 crc kubenswrapper[4687]: I1203 17:55:55.037928 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qs67" 
event={"ID":"524b5142-0cc6-4f1b-9daf-4a7e32da6a56","Type":"ContainerDied","Data":"7ef169b7e27d9288a09ac2d4a26f60fccf7bafd82db5164579518b022750cda6"} Dec 03 17:55:55 crc kubenswrapper[4687]: I1203 17:55:55.037956 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qs67" event={"ID":"524b5142-0cc6-4f1b-9daf-4a7e32da6a56","Type":"ContainerStarted","Data":"be418ca9aa4fe4ebd712af1ab519255842aa2f3817f23da5dc6c187a333e164d"} Dec 03 17:55:55 crc kubenswrapper[4687]: I1203 17:55:55.265974 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-586db6c45c-hj8pp" Dec 03 17:55:55 crc kubenswrapper[4687]: I1203 17:55:55.934954 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.023000 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-catalog-content\") pod \"5801f3f7-5f56-4130-b6dc-093a46663f85\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.023147 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlvlk\" (UniqueName: \"kubernetes.io/projected/5801f3f7-5f56-4130-b6dc-093a46663f85-kube-api-access-vlvlk\") pod \"5801f3f7-5f56-4130-b6dc-093a46663f85\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.023214 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-utilities\") pod \"5801f3f7-5f56-4130-b6dc-093a46663f85\" (UID: \"5801f3f7-5f56-4130-b6dc-093a46663f85\") " Dec 03 17:55:56 crc kubenswrapper[4687]: 
I1203 17:55:56.024204 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-utilities" (OuterVolumeSpecName: "utilities") pod "5801f3f7-5f56-4130-b6dc-093a46663f85" (UID: "5801f3f7-5f56-4130-b6dc-093a46663f85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.036451 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5801f3f7-5f56-4130-b6dc-093a46663f85-kube-api-access-vlvlk" (OuterVolumeSpecName: "kube-api-access-vlvlk") pod "5801f3f7-5f56-4130-b6dc-093a46663f85" (UID: "5801f3f7-5f56-4130-b6dc-093a46663f85"). InnerVolumeSpecName "kube-api-access-vlvlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.040062 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5801f3f7-5f56-4130-b6dc-093a46663f85" (UID: "5801f3f7-5f56-4130-b6dc-093a46663f85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.045925 4687 generic.go:334] "Generic (PLEG): container finished" podID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerID="fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8" exitCode=0 Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.045979 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkrsz" event={"ID":"5801f3f7-5f56-4130-b6dc-093a46663f85","Type":"ContainerDied","Data":"fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8"} Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.045987 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qkrsz" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.046004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qkrsz" event={"ID":"5801f3f7-5f56-4130-b6dc-093a46663f85","Type":"ContainerDied","Data":"b10b05a4a572df75f73061d81655a965700b6a25693f1c8e8c2193f77ab76c2f"} Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.046020 4687 scope.go:117] "RemoveContainer" containerID="fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.048559 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qs67" event={"ID":"524b5142-0cc6-4f1b-9daf-4a7e32da6a56","Type":"ContainerStarted","Data":"235efdf1622f60093bce6f1627fb2f339c484e36eb65998ed89c2b7e4036266b"} Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.063103 4687 scope.go:117] "RemoveContainer" containerID="c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.082798 4687 scope.go:117] "RemoveContainer" containerID="f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.092723 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkrsz"] Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.097041 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qkrsz"] Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.106079 4687 scope.go:117] "RemoveContainer" containerID="fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8" Dec 03 17:55:56 crc kubenswrapper[4687]: E1203 17:55:56.106410 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8\": container with ID starting with fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8 not found: ID does not exist" containerID="fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.106473 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8"} err="failed to get container status \"fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8\": rpc error: code = NotFound desc = could not find container \"fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8\": container with ID starting with fdb4d6023044c64ef7a9c2be0075701c849222023a38fd16121a63f514d2d3f8 not found: ID does not exist" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.106511 4687 scope.go:117] "RemoveContainer" containerID="c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061" Dec 03 17:55:56 crc kubenswrapper[4687]: E1203 17:55:56.106699 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061\": container with ID starting with c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061 not found: ID does not exist" containerID="c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.106719 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061"} err="failed to get container status \"c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061\": rpc error: code = NotFound desc = could not find container \"c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061\": container with ID 
starting with c3d52516c29d9e06b2cde84ed0d584a5f1026488d5d8b8f34dd173df5b0b7061 not found: ID does not exist" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.106733 4687 scope.go:117] "RemoveContainer" containerID="f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385" Dec 03 17:55:56 crc kubenswrapper[4687]: E1203 17:55:56.106928 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385\": container with ID starting with f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385 not found: ID does not exist" containerID="f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.106953 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385"} err="failed to get container status \"f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385\": rpc error: code = NotFound desc = could not find container \"f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385\": container with ID starting with f885db3641e41a0bac75b71a04a77ac5f9b67b48d8774558f391ee88d59bd385 not found: ID does not exist" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.124378 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlvlk\" (UniqueName: \"kubernetes.io/projected/5801f3f7-5f56-4130-b6dc-093a46663f85-kube-api-access-vlvlk\") on node \"crc\" DevicePath \"\"" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.124409 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:55:56 crc kubenswrapper[4687]: I1203 17:55:56.124424 4687 reconciler_common.go:293] "Volume detached 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5801f3f7-5f56-4130-b6dc-093a46663f85-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.055318 4687 generic.go:334] "Generic (PLEG): container finished" podID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerID="235efdf1622f60093bce6f1627fb2f339c484e36eb65998ed89c2b7e4036266b" exitCode=0 Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.055372 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qs67" event={"ID":"524b5142-0cc6-4f1b-9daf-4a7e32da6a56","Type":"ContainerDied","Data":"235efdf1622f60093bce6f1627fb2f339c484e36eb65998ed89c2b7e4036266b"} Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.421828 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5801f3f7-5f56-4130-b6dc-093a46663f85" path="/var/lib/kubelet/pods/5801f3f7-5f56-4130-b6dc-093a46663f85/volumes" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.568011 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-25z2c"] Dec 03 17:55:57 crc kubenswrapper[4687]: E1203 17:55:57.571936 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerName="registry-server" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.571974 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerName="registry-server" Dec 03 17:55:57 crc kubenswrapper[4687]: E1203 17:55:57.572052 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerName="extract-content" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.572103 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerName="extract-content" Dec 03 17:55:57 crc kubenswrapper[4687]: E1203 
17:55:57.572403 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerName="extract-utilities" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.572463 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerName="extract-utilities" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.574181 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5801f3f7-5f56-4130-b6dc-093a46663f85" containerName="registry-server" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.578765 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.582357 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25z2c"] Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.745544 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2dl\" (UniqueName: \"kubernetes.io/projected/145a41cc-7717-4143-b9a9-5077590fc210-kube-api-access-nt2dl\") pod \"community-operators-25z2c\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.745894 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-utilities\") pod \"community-operators-25z2c\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.746062 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-catalog-content\") pod \"community-operators-25z2c\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.847843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-utilities\") pod \"community-operators-25z2c\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.847922 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-catalog-content\") pod \"community-operators-25z2c\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.847983 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt2dl\" (UniqueName: \"kubernetes.io/projected/145a41cc-7717-4143-b9a9-5077590fc210-kube-api-access-nt2dl\") pod \"community-operators-25z2c\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.849065 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-catalog-content\") pod \"community-operators-25z2c\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.849212 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-utilities\") pod \"community-operators-25z2c\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.867749 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt2dl\" (UniqueName: \"kubernetes.io/projected/145a41cc-7717-4143-b9a9-5077590fc210-kube-api-access-nt2dl\") pod \"community-operators-25z2c\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:57 crc kubenswrapper[4687]: I1203 17:55:57.908559 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:55:58 crc kubenswrapper[4687]: I1203 17:55:58.064672 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qs67" event={"ID":"524b5142-0cc6-4f1b-9daf-4a7e32da6a56","Type":"ContainerStarted","Data":"821ee45ba4bd9effce77fd5ba01c4c8b398fd4c0a6d10010c3a8b682896daa9b"} Dec 03 17:55:58 crc kubenswrapper[4687]: I1203 17:55:58.097230 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4qs67" podStartSLOduration=2.6812905049999998 podStartE2EDuration="5.097210693s" podCreationTimestamp="2025-12-03 17:55:53 +0000 UTC" firstStartedPulling="2025-12-03 17:55:55.039854661 +0000 UTC m=+987.930550094" lastFinishedPulling="2025-12-03 17:55:57.455774849 +0000 UTC m=+990.346470282" observedRunningTime="2025-12-03 17:55:58.090905982 +0000 UTC m=+990.981601415" watchObservedRunningTime="2025-12-03 17:55:58.097210693 +0000 UTC m=+990.987906126" Dec 03 17:55:58 crc kubenswrapper[4687]: I1203 17:55:58.189499 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-25z2c"] Dec 03 17:55:58 crc kubenswrapper[4687]: W1203 
17:55:58.193669 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod145a41cc_7717_4143_b9a9_5077590fc210.slice/crio-8e16859696e1db6fe7f81062fdf9c10d651d1a751f9d420ee99488b951564797 WatchSource:0}: Error finding container 8e16859696e1db6fe7f81062fdf9c10d651d1a751f9d420ee99488b951564797: Status 404 returned error can't find the container with id 8e16859696e1db6fe7f81062fdf9c10d651d1a751f9d420ee99488b951564797 Dec 03 17:55:59 crc kubenswrapper[4687]: I1203 17:55:59.072553 4687 generic.go:334] "Generic (PLEG): container finished" podID="145a41cc-7717-4143-b9a9-5077590fc210" containerID="6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0" exitCode=0 Dec 03 17:55:59 crc kubenswrapper[4687]: I1203 17:55:59.072627 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2c" event={"ID":"145a41cc-7717-4143-b9a9-5077590fc210","Type":"ContainerDied","Data":"6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0"} Dec 03 17:55:59 crc kubenswrapper[4687]: I1203 17:55:59.072909 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2c" event={"ID":"145a41cc-7717-4143-b9a9-5077590fc210","Type":"ContainerStarted","Data":"8e16859696e1db6fe7f81062fdf9c10d651d1a751f9d420ee99488b951564797"} Dec 03 17:56:00 crc kubenswrapper[4687]: I1203 17:56:00.080002 4687 generic.go:334] "Generic (PLEG): container finished" podID="145a41cc-7717-4143-b9a9-5077590fc210" containerID="01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8" exitCode=0 Dec 03 17:56:00 crc kubenswrapper[4687]: I1203 17:56:00.080090 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2c" event={"ID":"145a41cc-7717-4143-b9a9-5077590fc210","Type":"ContainerDied","Data":"01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8"} Dec 03 17:56:01 crc 
kubenswrapper[4687]: I1203 17:56:01.088488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2c" event={"ID":"145a41cc-7717-4143-b9a9-5077590fc210","Type":"ContainerStarted","Data":"95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d"} Dec 03 17:56:01 crc kubenswrapper[4687]: I1203 17:56:01.104056 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-25z2c" podStartSLOduration=2.720881582 podStartE2EDuration="4.104037239s" podCreationTimestamp="2025-12-03 17:55:57 +0000 UTC" firstStartedPulling="2025-12-03 17:55:59.07398091 +0000 UTC m=+991.964676343" lastFinishedPulling="2025-12-03 17:56:00.457136577 +0000 UTC m=+993.347832000" observedRunningTime="2025-12-03 17:56:01.103724831 +0000 UTC m=+993.994420274" watchObservedRunningTime="2025-12-03 17:56:01.104037239 +0000 UTC m=+993.994732672" Dec 03 17:56:04 crc kubenswrapper[4687]: I1203 17:56:04.301249 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:56:04 crc kubenswrapper[4687]: I1203 17:56:04.301596 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:56:04 crc kubenswrapper[4687]: I1203 17:56:04.340982 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:56:05 crc kubenswrapper[4687]: I1203 17:56:05.214845 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:56:05 crc kubenswrapper[4687]: I1203 17:56:05.555143 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qs67"] Dec 03 17:56:07 crc kubenswrapper[4687]: I1203 17:56:07.128567 4687 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-4qs67" podUID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerName="registry-server" containerID="cri-o://821ee45ba4bd9effce77fd5ba01c4c8b398fd4c0a6d10010c3a8b682896daa9b" gracePeriod=2 Dec 03 17:56:07 crc kubenswrapper[4687]: I1203 17:56:07.909676 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:56:07 crc kubenswrapper[4687]: I1203 17:56:07.912384 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.014661 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.136078 4687 generic.go:334] "Generic (PLEG): container finished" podID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerID="821ee45ba4bd9effce77fd5ba01c4c8b398fd4c0a6d10010c3a8b682896daa9b" exitCode=0 Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.136150 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qs67" event={"ID":"524b5142-0cc6-4f1b-9daf-4a7e32da6a56","Type":"ContainerDied","Data":"821ee45ba4bd9effce77fd5ba01c4c8b398fd4c0a6d10010c3a8b682896daa9b"} Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.174024 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.605898 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.692524 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr7gn\" (UniqueName: \"kubernetes.io/projected/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-kube-api-access-mr7gn\") pod \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.692585 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-utilities\") pod \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.692717 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-catalog-content\") pod \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\" (UID: \"524b5142-0cc6-4f1b-9daf-4a7e32da6a56\") " Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.693918 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-utilities" (OuterVolumeSpecName: "utilities") pod "524b5142-0cc6-4f1b-9daf-4a7e32da6a56" (UID: "524b5142-0cc6-4f1b-9daf-4a7e32da6a56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.715554 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-kube-api-access-mr7gn" (OuterVolumeSpecName: "kube-api-access-mr7gn") pod "524b5142-0cc6-4f1b-9daf-4a7e32da6a56" (UID: "524b5142-0cc6-4f1b-9daf-4a7e32da6a56"). InnerVolumeSpecName "kube-api-access-mr7gn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.755008 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "524b5142-0cc6-4f1b-9daf-4a7e32da6a56" (UID: "524b5142-0cc6-4f1b-9daf-4a7e32da6a56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.793667 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr7gn\" (UniqueName: \"kubernetes.io/projected/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-kube-api-access-mr7gn\") on node \"crc\" DevicePath \"\"" Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.793707 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:56:08 crc kubenswrapper[4687]: I1203 17:56:08.793719 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/524b5142-0cc6-4f1b-9daf-4a7e32da6a56-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:56:09 crc kubenswrapper[4687]: I1203 17:56:09.143495 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4qs67" Dec 03 17:56:09 crc kubenswrapper[4687]: I1203 17:56:09.143495 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qs67" event={"ID":"524b5142-0cc6-4f1b-9daf-4a7e32da6a56","Type":"ContainerDied","Data":"be418ca9aa4fe4ebd712af1ab519255842aa2f3817f23da5dc6c187a333e164d"} Dec 03 17:56:09 crc kubenswrapper[4687]: I1203 17:56:09.143567 4687 scope.go:117] "RemoveContainer" containerID="821ee45ba4bd9effce77fd5ba01c4c8b398fd4c0a6d10010c3a8b682896daa9b" Dec 03 17:56:09 crc kubenswrapper[4687]: I1203 17:56:09.155439 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25z2c"] Dec 03 17:56:09 crc kubenswrapper[4687]: I1203 17:56:09.161916 4687 scope.go:117] "RemoveContainer" containerID="235efdf1622f60093bce6f1627fb2f339c484e36eb65998ed89c2b7e4036266b" Dec 03 17:56:09 crc kubenswrapper[4687]: I1203 17:56:09.179557 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qs67"] Dec 03 17:56:09 crc kubenswrapper[4687]: I1203 17:56:09.179712 4687 scope.go:117] "RemoveContainer" containerID="7ef169b7e27d9288a09ac2d4a26f60fccf7bafd82db5164579518b022750cda6" Dec 03 17:56:09 crc kubenswrapper[4687]: I1203 17:56:09.193951 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4qs67"] Dec 03 17:56:09 crc kubenswrapper[4687]: I1203 17:56:09.414235 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" path="/var/lib/kubelet/pods/524b5142-0cc6-4f1b-9daf-4a7e32da6a56/volumes" Dec 03 17:56:11 crc kubenswrapper[4687]: I1203 17:56:11.156374 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-25z2c" podUID="145a41cc-7717-4143-b9a9-5077590fc210" containerName="registry-server" 
containerID="cri-o://95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d" gracePeriod=2 Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.025885 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.158343 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-catalog-content\") pod \"145a41cc-7717-4143-b9a9-5077590fc210\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.158402 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-utilities\") pod \"145a41cc-7717-4143-b9a9-5077590fc210\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.158475 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt2dl\" (UniqueName: \"kubernetes.io/projected/145a41cc-7717-4143-b9a9-5077590fc210-kube-api-access-nt2dl\") pod \"145a41cc-7717-4143-b9a9-5077590fc210\" (UID: \"145a41cc-7717-4143-b9a9-5077590fc210\") " Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.160284 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-utilities" (OuterVolumeSpecName: "utilities") pod "145a41cc-7717-4143-b9a9-5077590fc210" (UID: "145a41cc-7717-4143-b9a9-5077590fc210"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.166297 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/145a41cc-7717-4143-b9a9-5077590fc210-kube-api-access-nt2dl" (OuterVolumeSpecName: "kube-api-access-nt2dl") pod "145a41cc-7717-4143-b9a9-5077590fc210" (UID: "145a41cc-7717-4143-b9a9-5077590fc210"). InnerVolumeSpecName "kube-api-access-nt2dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.166560 4687 generic.go:334] "Generic (PLEG): container finished" podID="145a41cc-7717-4143-b9a9-5077590fc210" containerID="95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d" exitCode=0 Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.166674 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2c" event={"ID":"145a41cc-7717-4143-b9a9-5077590fc210","Type":"ContainerDied","Data":"95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d"} Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.166790 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-25z2c" event={"ID":"145a41cc-7717-4143-b9a9-5077590fc210","Type":"ContainerDied","Data":"8e16859696e1db6fe7f81062fdf9c10d651d1a751f9d420ee99488b951564797"} Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.166725 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-25z2c" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.166849 4687 scope.go:117] "RemoveContainer" containerID="95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.209915 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "145a41cc-7717-4143-b9a9-5077590fc210" (UID: "145a41cc-7717-4143-b9a9-5077590fc210"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.228567 4687 scope.go:117] "RemoveContainer" containerID="01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.262037 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt2dl\" (UniqueName: \"kubernetes.io/projected/145a41cc-7717-4143-b9a9-5077590fc210-kube-api-access-nt2dl\") on node \"crc\" DevicePath \"\"" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.262086 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.262102 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/145a41cc-7717-4143-b9a9-5077590fc210-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.262924 4687 scope.go:117] "RemoveContainer" containerID="6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.287339 4687 scope.go:117] "RemoveContainer" 
containerID="95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d" Dec 03 17:56:12 crc kubenswrapper[4687]: E1203 17:56:12.287829 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d\": container with ID starting with 95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d not found: ID does not exist" containerID="95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.287874 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d"} err="failed to get container status \"95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d\": rpc error: code = NotFound desc = could not find container \"95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d\": container with ID starting with 95bc085599392b2d5c5bb48c105ced5d353cee53f5cd58364630f0efac2fbf6d not found: ID does not exist" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.287905 4687 scope.go:117] "RemoveContainer" containerID="01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8" Dec 03 17:56:12 crc kubenswrapper[4687]: E1203 17:56:12.289760 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8\": container with ID starting with 01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8 not found: ID does not exist" containerID="01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.289800 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8"} err="failed to get container status \"01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8\": rpc error: code = NotFound desc = could not find container \"01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8\": container with ID starting with 01c918fbdb000a1d91878587f3921256205109600235ede099738eda860dc1a8 not found: ID does not exist" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.289815 4687 scope.go:117] "RemoveContainer" containerID="6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0" Dec 03 17:56:12 crc kubenswrapper[4687]: E1203 17:56:12.290292 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0\": container with ID starting with 6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0 not found: ID does not exist" containerID="6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.290328 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0"} err="failed to get container status \"6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0\": rpc error: code = NotFound desc = could not find container \"6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0\": container with ID starting with 6872aca2c1b735e55f729236189994a50b3a84b8e4cb55bed43bf5008a5dfde0 not found: ID does not exist" Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.501659 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-25z2c"] Dec 03 17:56:12 crc kubenswrapper[4687]: I1203 17:56:12.510527 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-25z2c"] Dec 03 17:56:13 crc kubenswrapper[4687]: I1203 17:56:13.417255 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="145a41cc-7717-4143-b9a9-5077590fc210" path="/var/lib/kubelet/pods/145a41cc-7717-4143-b9a9-5077590fc210/volumes" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.579487 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6"] Dec 03 17:56:14 crc kubenswrapper[4687]: E1203 17:56:14.580644 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerName="extract-content" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.580728 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerName="extract-content" Dec 03 17:56:14 crc kubenswrapper[4687]: E1203 17:56:14.580796 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145a41cc-7717-4143-b9a9-5077590fc210" containerName="extract-utilities" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.580878 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="145a41cc-7717-4143-b9a9-5077590fc210" containerName="extract-utilities" Dec 03 17:56:14 crc kubenswrapper[4687]: E1203 17:56:14.580942 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerName="registry-server" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.581003 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerName="registry-server" Dec 03 17:56:14 crc kubenswrapper[4687]: E1203 17:56:14.581233 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerName="extract-utilities" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.581308 4687 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerName="extract-utilities" Dec 03 17:56:14 crc kubenswrapper[4687]: E1203 17:56:14.581366 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145a41cc-7717-4143-b9a9-5077590fc210" containerName="registry-server" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.581420 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="145a41cc-7717-4143-b9a9-5077590fc210" containerName="registry-server" Dec 03 17:56:14 crc kubenswrapper[4687]: E1203 17:56:14.581478 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="145a41cc-7717-4143-b9a9-5077590fc210" containerName="extract-content" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.581528 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="145a41cc-7717-4143-b9a9-5077590fc210" containerName="extract-content" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.581691 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="145a41cc-7717-4143-b9a9-5077590fc210" containerName="registry-server" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.581759 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="524b5142-0cc6-4f1b-9daf-4a7e32da6a56" containerName="registry-server" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.582500 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.585933 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8r5vb" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.589607 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.600586 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.601778 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.605632 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-q7t9r" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.639248 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.654925 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.656158 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.661471 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pzn6z" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.661653 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.662734 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.667320 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ql4t2" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.696109 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c567\" (UniqueName: \"kubernetes.io/projected/496e4d0a-a886-4d53-993c-66081d8843ae-kube-api-access-8c567\") pod \"glance-operator-controller-manager-77987cd8cd-nwzp4\" (UID: \"496e4d0a-a886-4d53-993c-66081d8843ae\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.696255 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-655qc\" (UniqueName: \"kubernetes.io/projected/d3d2df8d-6f3d-4f5d-afd3-cef00553188e-kube-api-access-655qc\") pod \"barbican-operator-controller-manager-7d9dfd778-zmzr6\" (UID: \"d3d2df8d-6f3d-4f5d-afd3-cef00553188e\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.696298 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g44m\" (UniqueName: \"kubernetes.io/projected/f7046b74-0868-4ee1-b917-56e695a94d16-kube-api-access-6g44m\") pod \"cinder-operator-controller-manager-859b6ccc6-fn5xb\" (UID: \"f7046b74-0868-4ee1-b917-56e695a94d16\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.696321 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdq9h\" (UniqueName: \"kubernetes.io/projected/6fa88489-3c47-4369-9f87-a3f029f75a42-kube-api-access-mdq9h\") pod \"designate-operator-controller-manager-78b4bc895b-6xgff\" (UID: \"6fa88489-3c47-4369-9f87-a3f029f75a42\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.709450 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.735396 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.749656 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.750674 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.752601 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-vm2tj" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.763432 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.792239 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.793683 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.797109 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ww9mw" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.801309 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-655qc\" (UniqueName: \"kubernetes.io/projected/d3d2df8d-6f3d-4f5d-afd3-cef00553188e-kube-api-access-655qc\") pod \"barbican-operator-controller-manager-7d9dfd778-zmzr6\" (UID: \"d3d2df8d-6f3d-4f5d-afd3-cef00553188e\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.801364 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g44m\" (UniqueName: \"kubernetes.io/projected/f7046b74-0868-4ee1-b917-56e695a94d16-kube-api-access-6g44m\") pod \"cinder-operator-controller-manager-859b6ccc6-fn5xb\" (UID: \"f7046b74-0868-4ee1-b917-56e695a94d16\") " 
pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.801380 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdq9h\" (UniqueName: \"kubernetes.io/projected/6fa88489-3c47-4369-9f87-a3f029f75a42-kube-api-access-mdq9h\") pod \"designate-operator-controller-manager-78b4bc895b-6xgff\" (UID: \"6fa88489-3c47-4369-9f87-a3f029f75a42\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.801422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c567\" (UniqueName: \"kubernetes.io/projected/496e4d0a-a886-4d53-993c-66081d8843ae-kube-api-access-8c567\") pod \"glance-operator-controller-manager-77987cd8cd-nwzp4\" (UID: \"496e4d0a-a886-4d53-993c-66081d8843ae\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.801964 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-lx2md"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.803163 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.810522 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.810986 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nwzck" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.813230 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.814340 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.822614 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.823451 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pb6m7" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.843839 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-655qc\" (UniqueName: \"kubernetes.io/projected/d3d2df8d-6f3d-4f5d-afd3-cef00553188e-kube-api-access-655qc\") pod \"barbican-operator-controller-manager-7d9dfd778-zmzr6\" (UID: \"d3d2df8d-6f3d-4f5d-afd3-cef00553188e\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.844604 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g44m\" (UniqueName: 
\"kubernetes.io/projected/f7046b74-0868-4ee1-b917-56e695a94d16-kube-api-access-6g44m\") pod \"cinder-operator-controller-manager-859b6ccc6-fn5xb\" (UID: \"f7046b74-0868-4ee1-b917-56e695a94d16\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.856765 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c567\" (UniqueName: \"kubernetes.io/projected/496e4d0a-a886-4d53-993c-66081d8843ae-kube-api-access-8c567\") pod \"glance-operator-controller-manager-77987cd8cd-nwzp4\" (UID: \"496e4d0a-a886-4d53-993c-66081d8843ae\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.856821 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdq9h\" (UniqueName: \"kubernetes.io/projected/6fa88489-3c47-4369-9f87-a3f029f75a42-kube-api-access-mdq9h\") pod \"designate-operator-controller-manager-78b4bc895b-6xgff\" (UID: \"6fa88489-3c47-4369-9f87-a3f029f75a42\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.856896 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-lx2md"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.862065 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.882262 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.883349 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.889179 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-l5wqx" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.902623 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcfxx\" (UniqueName: \"kubernetes.io/projected/0e3acf7a-4766-4f89-9f70-d5ec2690318b-kube-api-access-tcfxx\") pod \"horizon-operator-controller-manager-68c6d99b8f-sdbgv\" (UID: \"0e3acf7a-4766-4f89-9f70-d5ec2690318b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.902719 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rqrx\" (UniqueName: \"kubernetes.io/projected/b63b97e0-be73-4e96-9904-9f5c030a0afb-kube-api-access-8rqrx\") pod \"heat-operator-controller-manager-5f64f6f8bb-h6x45\" (UID: \"b63b97e0-be73-4e96-9904-9f5c030a0afb\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.903179 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.925150 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.926323 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.929439 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fc2r2" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.930179 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.947575 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.967400 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.981301 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.982700 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.987909 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gd7c2" Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.996973 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5"] Dec 03 17:56:14 crc kubenswrapper[4687]: I1203 17:56:14.998484 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.000481 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dc4hw" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.009672 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcfxx\" (UniqueName: \"kubernetes.io/projected/0e3acf7a-4766-4f89-9f70-d5ec2690318b-kube-api-access-tcfxx\") pod \"horizon-operator-controller-manager-68c6d99b8f-sdbgv\" (UID: \"0e3acf7a-4766-4f89-9f70-d5ec2690318b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.009730 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.009762 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zvkh\" (UniqueName: \"kubernetes.io/projected/e48eab37-9bd2-4f8d-892a-4436c68bab21-kube-api-access-9zvkh\") pod \"ironic-operator-controller-manager-6c548fd776-fcwrt\" (UID: \"e48eab37-9bd2-4f8d-892a-4436c68bab21\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.009819 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thqzw\" (UniqueName: \"kubernetes.io/projected/e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b-kube-api-access-thqzw\") pod 
\"keystone-operator-controller-manager-7765d96ddf-mzvdw\" (UID: \"e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.009863 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rqrx\" (UniqueName: \"kubernetes.io/projected/b63b97e0-be73-4e96-9904-9f5c030a0afb-kube-api-access-8rqrx\") pod \"heat-operator-controller-manager-5f64f6f8bb-h6x45\" (UID: \"b63b97e0-be73-4e96-9904-9f5c030a0afb\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.009895 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjm7w\" (UniqueName: \"kubernetes.io/projected/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-kube-api-access-pjm7w\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.013818 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.017362 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.026194 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.027427 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.037058 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-p96wf" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.039661 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcfxx\" (UniqueName: \"kubernetes.io/projected/0e3acf7a-4766-4f89-9f70-d5ec2690318b-kube-api-access-tcfxx\") pod \"horizon-operator-controller-manager-68c6d99b8f-sdbgv\" (UID: \"0e3acf7a-4766-4f89-9f70-d5ec2690318b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.051572 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rqrx\" (UniqueName: \"kubernetes.io/projected/b63b97e0-be73-4e96-9904-9f5c030a0afb-kube-api-access-8rqrx\") pod \"heat-operator-controller-manager-5f64f6f8bb-h6x45\" (UID: \"b63b97e0-be73-4e96-9904-9f5c030a0afb\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.073016 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.084192 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.121062 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.121700 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfl2\" (UniqueName: \"kubernetes.io/projected/491fb200-3ef9-4833-83c6-22b575b46998-kube-api-access-sqfl2\") pod \"mariadb-operator-controller-manager-56bbcc9d85-bfwb6\" (UID: \"491fb200-3ef9-4833-83c6-22b575b46998\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.121728 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.121749 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zvkh\" (UniqueName: \"kubernetes.io/projected/e48eab37-9bd2-4f8d-892a-4436c68bab21-kube-api-access-9zvkh\") pod \"ironic-operator-controller-manager-6c548fd776-fcwrt\" (UID: \"e48eab37-9bd2-4f8d-892a-4436c68bab21\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.121771 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6btp\" (UniqueName: \"kubernetes.io/projected/379ff892-6dae-4b1b-9ae1-f6b7da9f4db6-kube-api-access-p6btp\") pod \"nova-operator-controller-manager-697bc559fc-9pldz\" (UID: \"379ff892-6dae-4b1b-9ae1-f6b7da9f4db6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.121800 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gk82\" (UniqueName: \"kubernetes.io/projected/d57e7a62-6958-4e64-98e6-a22857b00e32-kube-api-access-7gk82\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-7ftp5\" (UID: \"d57e7a62-6958-4e64-98e6-a22857b00e32\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.121823 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thqzw\" (UniqueName: \"kubernetes.io/projected/e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b-kube-api-access-thqzw\") pod \"keystone-operator-controller-manager-7765d96ddf-mzvdw\" (UID: \"e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.121850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xr4n\" (UniqueName: \"kubernetes.io/projected/59db1fe9-9d85-4346-8718-4e9139c8acb9-kube-api-access-8xr4n\") pod \"manila-operator-controller-manager-7c79b5df47-hrlqq\" (UID: \"59db1fe9-9d85-4346-8718-4e9139c8acb9\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.121873 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjm7w\" (UniqueName: \"kubernetes.io/projected/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-kube-api-access-pjm7w\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.122010 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.122079 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert podName:6abb698e-8c6d-40c8-b87d-dcd828bba5d3 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:15.622052525 +0000 UTC m=+1008.512747968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert") pod "infra-operator-controller-manager-57548d458d-lx2md" (UID: "6abb698e-8c6d-40c8-b87d-dcd828bba5d3") : secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.127193 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.141188 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.151359 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.152616 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.156294 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thqzw\" (UniqueName: \"kubernetes.io/projected/e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b-kube-api-access-thqzw\") pod \"keystone-operator-controller-manager-7765d96ddf-mzvdw\" (UID: \"e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.157945 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-8298s" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.162734 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.164353 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.164769 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjm7w\" (UniqueName: \"kubernetes.io/projected/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-kube-api-access-pjm7w\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.166715 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vhlfm" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.182669 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zvkh\" (UniqueName: \"kubernetes.io/projected/e48eab37-9bd2-4f8d-892a-4436c68bab21-kube-api-access-9zvkh\") pod \"ironic-operator-controller-manager-6c548fd776-fcwrt\" (UID: \"e48eab37-9bd2-4f8d-892a-4436c68bab21\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.200828 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.205666 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.207403 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.218644 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.233552 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqfl2\" (UniqueName: \"kubernetes.io/projected/491fb200-3ef9-4833-83c6-22b575b46998-kube-api-access-sqfl2\") pod \"mariadb-operator-controller-manager-56bbcc9d85-bfwb6\" (UID: \"491fb200-3ef9-4833-83c6-22b575b46998\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.233623 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6btp\" (UniqueName: \"kubernetes.io/projected/379ff892-6dae-4b1b-9ae1-f6b7da9f4db6-kube-api-access-p6btp\") pod \"nova-operator-controller-manager-697bc559fc-9pldz\" (UID: \"379ff892-6dae-4b1b-9ae1-f6b7da9f4db6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.233667 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gk82\" (UniqueName: \"kubernetes.io/projected/d57e7a62-6958-4e64-98e6-a22857b00e32-kube-api-access-7gk82\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-7ftp5\" (UID: \"d57e7a62-6958-4e64-98e6-a22857b00e32\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.233715 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xr4n\" (UniqueName: \"kubernetes.io/projected/59db1fe9-9d85-4346-8718-4e9139c8acb9-kube-api-access-8xr4n\") pod 
\"manila-operator-controller-manager-7c79b5df47-hrlqq\" (UID: \"59db1fe9-9d85-4346-8718-4e9139c8acb9\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.265401 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.265602 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.285227 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.343331 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pwvqb" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.377179 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gk82\" (UniqueName: \"kubernetes.io/projected/d57e7a62-6958-4e64-98e6-a22857b00e32-kube-api-access-7gk82\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-7ftp5\" (UID: \"d57e7a62-6958-4e64-98e6-a22857b00e32\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.377896 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xr4n\" (UniqueName: \"kubernetes.io/projected/59db1fe9-9d85-4346-8718-4e9139c8acb9-kube-api-access-8xr4n\") pod \"manila-operator-controller-manager-7c79b5df47-hrlqq\" (UID: \"59db1fe9-9d85-4346-8718-4e9139c8acb9\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.391874 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6btp\" (UniqueName: \"kubernetes.io/projected/379ff892-6dae-4b1b-9ae1-f6b7da9f4db6-kube-api-access-p6btp\") pod \"nova-operator-controller-manager-697bc559fc-9pldz\" (UID: \"379ff892-6dae-4b1b-9ae1-f6b7da9f4db6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.403888 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkhz7\" (UniqueName: \"kubernetes.io/projected/58a46d42-dade-4bfe-b9b0-bddac75f1d81-kube-api-access-zkhz7\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.404213 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.404421 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppzk\" (UniqueName: \"kubernetes.io/projected/1655eb12-9c61-4959-9886-bd6f50b95292-kube-api-access-zppzk\") pod \"ovn-operator-controller-manager-b6456fdb6-xjjxv\" (UID: \"1655eb12-9c61-4959-9886-bd6f50b95292\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.404563 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xrp6c\" (UniqueName: \"kubernetes.io/projected/5952221c-60d0-4159-bbd8-2adf2f1e3d8e-kube-api-access-xrp6c\") pod \"octavia-operator-controller-manager-998648c74-xj6hg\" (UID: \"5952221c-60d0-4159-bbd8-2adf2f1e3d8e\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.424849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqfl2\" (UniqueName: \"kubernetes.io/projected/491fb200-3ef9-4833-83c6-22b575b46998-kube-api-access-sqfl2\") pod \"mariadb-operator-controller-manager-56bbcc9d85-bfwb6\" (UID: \"491fb200-3ef9-4833-83c6-22b575b46998\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.450336 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.490686 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.506994 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.508048 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.510885 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.515615 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.516931 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-slmcg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.518254 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-l46cz" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.528495 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.575274 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.585303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkhz7\" (UniqueName: \"kubernetes.io/projected/58a46d42-dade-4bfe-b9b0-bddac75f1d81-kube-api-access-zkhz7\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.585358 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.585438 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zppzk\" (UniqueName: \"kubernetes.io/projected/1655eb12-9c61-4959-9886-bd6f50b95292-kube-api-access-zppzk\") pod \"ovn-operator-controller-manager-b6456fdb6-xjjxv\" (UID: \"1655eb12-9c61-4959-9886-bd6f50b95292\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.585465 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrp6c\" (UniqueName: \"kubernetes.io/projected/5952221c-60d0-4159-bbd8-2adf2f1e3d8e-kube-api-access-xrp6c\") pod \"octavia-operator-controller-manager-998648c74-xj6hg\" (UID: \"5952221c-60d0-4159-bbd8-2adf2f1e3d8e\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.586720 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.586824 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert podName:58a46d42-dade-4bfe-b9b0-bddac75f1d81 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:16.086792474 +0000 UTC m=+1008.977487907 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" (UID: "58a46d42-dade-4bfe-b9b0-bddac75f1d81") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.601344 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.613087 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.614539 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.615489 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkhz7\" (UniqueName: \"kubernetes.io/projected/58a46d42-dade-4bfe-b9b0-bddac75f1d81-kube-api-access-zkhz7\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.615976 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-9vpst" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.616336 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrp6c\" (UniqueName: \"kubernetes.io/projected/5952221c-60d0-4159-bbd8-2adf2f1e3d8e-kube-api-access-xrp6c\") pod \"octavia-operator-controller-manager-998648c74-xj6hg\" (UID: \"5952221c-60d0-4159-bbd8-2adf2f1e3d8e\") " 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.616339 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppzk\" (UniqueName: \"kubernetes.io/projected/1655eb12-9c61-4959-9886-bd6f50b95292-kube-api-access-zppzk\") pod \"ovn-operator-controller-manager-b6456fdb6-xjjxv\" (UID: \"1655eb12-9c61-4959-9886-bd6f50b95292\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.621435 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.641673 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-58bfx"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.642814 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.646097 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vxd8w" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.656112 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-58bfx"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.659158 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.679611 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.680980 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.683567 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gjqjg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.684478 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.686048 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxs5v\" (UniqueName: \"kubernetes.io/projected/6e6fc336-ee86-4c81-bbc7-76b241f4cffa-kube-api-access-zxs5v\") pod \"swift-operator-controller-manager-5f8c65bbfc-gbkkg\" (UID: \"6e6fc336-ee86-4c81-bbc7-76b241f4cffa\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.686160 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9thhf\" (UniqueName: \"kubernetes.io/projected/e0b4d539-a10d-4f94-8097-667df133713d-kube-api-access-9thhf\") pod \"placement-operator-controller-manager-78f8948974-vpdn7\" (UID: \"e0b4d539-a10d-4f94-8097-667df133713d\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.686192 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.686332 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.686446 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert podName:6abb698e-8c6d-40c8-b87d-dcd828bba5d3 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:16.686373725 +0000 UTC m=+1009.577069158 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert") pod "infra-operator-controller-manager-57548d458d-lx2md" (UID: "6abb698e-8c6d-40c8-b87d-dcd828bba5d3") : secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.691971 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.698101 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.723680 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.724723 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.727713 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8g6fq" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.727936 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.728073 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.741016 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.758297 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.759291 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.764943 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-stxsk" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.765308 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8"] Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.786925 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9thhf\" (UniqueName: \"kubernetes.io/projected/e0b4d539-a10d-4f94-8097-667df133713d-kube-api-access-9thhf\") pod \"placement-operator-controller-manager-78f8948974-vpdn7\" (UID: \"e0b4d539-a10d-4f94-8097-667df133713d\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.787003 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799wc\" (UniqueName: \"kubernetes.io/projected/f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf-kube-api-access-799wc\") pod \"test-operator-controller-manager-5854674fcc-58bfx\" (UID: \"f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.787031 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7n89\" (UniqueName: \"kubernetes.io/projected/b119316e-0e6a-43d8-a5e3-0068f099fad0-kube-api-access-h7n89\") pod \"watcher-operator-controller-manager-769dc69bc-xvq78\" (UID: \"b119316e-0e6a-43d8-a5e3-0068f099fad0\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.787056 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxs5v\" (UniqueName: \"kubernetes.io/projected/6e6fc336-ee86-4c81-bbc7-76b241f4cffa-kube-api-access-zxs5v\") pod \"swift-operator-controller-manager-5f8c65bbfc-gbkkg\" (UID: \"6e6fc336-ee86-4c81-bbc7-76b241f4cffa\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.787086 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrks9\" (UniqueName: \"kubernetes.io/projected/785c9182-9230-4d64-9a16-81877ee4d03e-kube-api-access-lrks9\") pod \"telemetry-operator-controller-manager-76cc84c6bb-vxwfl\" (UID: \"785c9182-9230-4d64-9a16-81877ee4d03e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.819263 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9thhf\" (UniqueName: \"kubernetes.io/projected/e0b4d539-a10d-4f94-8097-667df133713d-kube-api-access-9thhf\") pod \"placement-operator-controller-manager-78f8948974-vpdn7\" (UID: \"e0b4d539-a10d-4f94-8097-667df133713d\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.830144 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxs5v\" (UniqueName: \"kubernetes.io/projected/6e6fc336-ee86-4c81-bbc7-76b241f4cffa-kube-api-access-zxs5v\") pod \"swift-operator-controller-manager-5f8c65bbfc-gbkkg\" (UID: \"6e6fc336-ee86-4c81-bbc7-76b241f4cffa\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.843900 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff"] Dec 03 17:56:15 crc 
kubenswrapper[4687]: I1203 17:56:15.854916 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.864574 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.889779 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrks9\" (UniqueName: \"kubernetes.io/projected/785c9182-9230-4d64-9a16-81877ee4d03e-kube-api-access-lrks9\") pod \"telemetry-operator-controller-manager-76cc84c6bb-vxwfl\" (UID: \"785c9182-9230-4d64-9a16-81877ee4d03e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.889816 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45jjk\" (UniqueName: \"kubernetes.io/projected/a9c3ecf7-40b8-43a9-902d-0fe02be37037-kube-api-access-45jjk\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.889864 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.889897 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lx96n\" (UniqueName: \"kubernetes.io/projected/9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3-kube-api-access-lx96n\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wpfh8\" (UID: \"9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.889953 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-799wc\" (UniqueName: \"kubernetes.io/projected/f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf-kube-api-access-799wc\") pod \"test-operator-controller-manager-5854674fcc-58bfx\" (UID: \"f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.889973 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.889994 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7n89\" (UniqueName: \"kubernetes.io/projected/b119316e-0e6a-43d8-a5e3-0068f099fad0-kube-api-access-h7n89\") pod \"watcher-operator-controller-manager-769dc69bc-xvq78\" (UID: \"b119316e-0e6a-43d8-a5e3-0068f099fad0\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.911866 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-799wc\" (UniqueName: \"kubernetes.io/projected/f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf-kube-api-access-799wc\") pod 
\"test-operator-controller-manager-5854674fcc-58bfx\" (UID: \"f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.913605 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrks9\" (UniqueName: \"kubernetes.io/projected/785c9182-9230-4d64-9a16-81877ee4d03e-kube-api-access-lrks9\") pod \"telemetry-operator-controller-manager-76cc84c6bb-vxwfl\" (UID: \"785c9182-9230-4d64-9a16-81877ee4d03e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.919248 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7n89\" (UniqueName: \"kubernetes.io/projected/b119316e-0e6a-43d8-a5e3-0068f099fad0-kube-api-access-h7n89\") pod \"watcher-operator-controller-manager-769dc69bc-xvq78\" (UID: \"b119316e-0e6a-43d8-a5e3-0068f099fad0\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.920241 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.945913 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.967248 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.992295 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.992364 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx96n\" (UniqueName: \"kubernetes.io/projected/9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3-kube-api-access-lx96n\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wpfh8\" (UID: \"9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.992442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.992450 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.992508 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. 
No retries permitted until 2025-12-03 17:56:16.492488608 +0000 UTC m=+1009.383184041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "webhook-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.992523 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45jjk\" (UniqueName: \"kubernetes.io/projected/a9c3ecf7-40b8-43a9-902d-0fe02be37037-kube-api-access-45jjk\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.992544 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: E1203 17:56:15.992577 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:16.49256492 +0000 UTC m=+1009.383260353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "metrics-server-cert" not found Dec 03 17:56:15 crc kubenswrapper[4687]: I1203 17:56:15.997688 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.029625 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45jjk\" (UniqueName: \"kubernetes.io/projected/a9c3ecf7-40b8-43a9-902d-0fe02be37037-kube-api-access-45jjk\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.038842 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx96n\" (UniqueName: \"kubernetes.io/projected/9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3-kube-api-access-lx96n\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wpfh8\" (UID: \"9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.094438 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.094626 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.094712 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert podName:58a46d42-dade-4bfe-b9b0-bddac75f1d81 nodeName:}" failed. 
No retries permitted until 2025-12-03 17:56:17.09468745 +0000 UTC m=+1009.985382893 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" (UID: "58a46d42-dade-4bfe-b9b0-bddac75f1d81") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.175347 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.324827 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.384031 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.391173 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.396036 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.401573 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.420587 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.447827 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.450998 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" event={"ID":"b63b97e0-be73-4e96-9904-9f5c030a0afb","Type":"ContainerStarted","Data":"1b9e1c7a1965167a69fbb480cd02c51e757e06028e0e782f957d63bc7373052a"} Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.454461 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.454795 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" event={"ID":"6fa88489-3c47-4369-9f87-a3f029f75a42","Type":"ContainerStarted","Data":"f49f58c7b4e8366423def52e9aa59eb546ea7f7f1594ad595a6dec802d779734"} Dec 03 17:56:16 crc kubenswrapper[4687]: W1203 17:56:16.455228 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod496e4d0a_a886_4d53_993c_66081d8843ae.slice/crio-4f9299d5e842094afcae04014b27621f4768fd80a780fa78a993f509d83be25f WatchSource:0}: Error finding container 4f9299d5e842094afcae04014b27621f4768fd80a780fa78a993f509d83be25f: Status 404 returned error can't find the container with id 4f9299d5e842094afcae04014b27621f4768fd80a780fa78a993f509d83be25f Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.456079 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" event={"ID":"0e3acf7a-4766-4f89-9f70-d5ec2690318b","Type":"ContainerStarted","Data":"40c4f570ba140826f2f971c0a6d7e4eb60e1a8c4f79ec0330001ff22d55c74c3"} Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.457109 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" event={"ID":"d3d2df8d-6f3d-4f5d-afd3-cef00553188e","Type":"ContainerStarted","Data":"029ce3f56272176062a31a593161529d36d03882834b7505629208f6a42852d9"} Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.517788 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.518049 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.518254 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.518582 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.518620 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:17.518568305 +0000 UTC m=+1010.409263738 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "metrics-server-cert" not found Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.518696 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:17.518671868 +0000 UTC m=+1010.409367301 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "webhook-server-cert" not found Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.628917 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.642088 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.721560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.721741 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 
17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.721827 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert podName:6abb698e-8c6d-40c8-b87d-dcd828bba5d3 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:18.721806158 +0000 UTC m=+1011.612501581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert") pod "infra-operator-controller-manager-57548d458d-lx2md" (UID: "6abb698e-8c6d-40c8-b87d-dcd828bba5d3") : secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.818745 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.832352 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.843633 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv"] Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.855159 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zppzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-xjjxv_openstack-operators(1655eb12-9c61-4959-9886-bd6f50b95292): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.868077 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zppzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-xjjxv_openstack-operators(1655eb12-9c61-4959-9886-bd6f50b95292): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.869272 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" 
podUID="1655eb12-9c61-4959-9886-bd6f50b95292" Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.914691 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7"] Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.941969 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9thhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-vpdn7_openstack-operators(e0b4d539-a10d-4f94-8097-667df133713d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.945711 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9thhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-vpdn7_openstack-operators(e0b4d539-a10d-4f94-8097-667df133713d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.948166 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" podUID="e0b4d539-a10d-4f94-8097-667df133713d" Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.950243 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8"] Dec 03 17:56:16 crc kubenswrapper[4687]: W1203 17:56:16.954429 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c5e71f4_be0f_4da7_8d14_bb46cc12c5b3.slice/crio-e8dcd2935ea0ac72949a1f001e26cc59360e3193b65da2290b5b16e0dfd59952 
WatchSource:0}: Error finding container e8dcd2935ea0ac72949a1f001e26cc59360e3193b65da2290b5b16e0dfd59952: Status 404 returned error can't find the container with id e8dcd2935ea0ac72949a1f001e26cc59360e3193b65da2290b5b16e0dfd59952 Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.960827 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lx96n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wpfh8_openstack-operators(9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.964274 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" podUID="9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.964512 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrks9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-vxwfl_openstack-operators(785c9182-9230-4d64-9a16-81877ee4d03e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.971226 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78"] Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.971907 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrks9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-vxwfl_openstack-operators(785c9182-9230-4d64-9a16-81877ee4d03e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.973346 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-799wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-58bfx_openstack-operators(f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.973482 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h7n89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-xvq78_openstack-operators(b119316e-0e6a-43d8-a5e3-0068f099fad0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.973583 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" podUID="785c9182-9230-4d64-9a16-81877ee4d03e" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.981651 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-799wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-58bfx_openstack-operators(f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.982752 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl"] Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.982789 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" podUID="f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.983709 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxs5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-gbkkg_openstack-operators(6e6fc336-ee86-4c81-bbc7-76b241f4cffa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.985561 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxs5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-gbkkg_openstack-operators(6e6fc336-ee86-4c81-bbc7-76b241f4cffa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 17:56:16 crc kubenswrapper[4687]: E1203 17:56:16.986739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" podUID="6e6fc336-ee86-4c81-bbc7-76b241f4cffa" Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.988217 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg"] Dec 03 17:56:16 crc kubenswrapper[4687]: I1203 17:56:16.993671 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-58bfx"] Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.130639 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.130823 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.130925 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert podName:58a46d42-dade-4bfe-b9b0-bddac75f1d81 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:19.130899803 +0000 UTC m=+1012.021595296 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" (UID: "58a46d42-dade-4bfe-b9b0-bddac75f1d81") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.466897 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" event={"ID":"59db1fe9-9d85-4346-8718-4e9139c8acb9","Type":"ContainerStarted","Data":"94327bdb22664cfedc47ada2c3b968d82d51b71c393f23336625f54bed04b938"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.472027 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" event={"ID":"e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b","Type":"ContainerStarted","Data":"c1c08a5e4af73455068910679b53e176b595b9e65c0d345df5a26f32336d9aff"} Dec 03 17:56:17 crc 
kubenswrapper[4687]: I1203 17:56:17.473272 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" event={"ID":"5952221c-60d0-4159-bbd8-2adf2f1e3d8e","Type":"ContainerStarted","Data":"870bd0ad02e19e26674188c76fb52da84c7d7bd4af2f2eb37ebaf238c243127a"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.474346 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" event={"ID":"496e4d0a-a886-4d53-993c-66081d8843ae","Type":"ContainerStarted","Data":"4f9299d5e842094afcae04014b27621f4768fd80a780fa78a993f509d83be25f"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.475165 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" event={"ID":"b119316e-0e6a-43d8-a5e3-0068f099fad0","Type":"ContainerStarted","Data":"bf1730794aaca80de442b4b09a5cb0b3989d147c687de03a3b37051ae3f69f1d"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.475952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" event={"ID":"379ff892-6dae-4b1b-9ae1-f6b7da9f4db6","Type":"ContainerStarted","Data":"634e1f4903b32b1b4383acd668816162a9a851dc85a20a44a3256ff4ccf69d10"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.478662 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" event={"ID":"491fb200-3ef9-4833-83c6-22b575b46998","Type":"ContainerStarted","Data":"cc90c040160957ab79127fb9e2ffe0875e61f9515669a9a247111488d76ffd73"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.481173 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" 
event={"ID":"6e6fc336-ee86-4c81-bbc7-76b241f4cffa","Type":"ContainerStarted","Data":"212ea0df885e4e151541c382a767ff47ce766e69004ba78c53bb1670bdd0485e"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.484935 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" event={"ID":"f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf","Type":"ContainerStarted","Data":"d2eaa7e38f3e41569e4851212556fc1a8219c2d0e6dca25f01d2215cd50826c7"} Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.485297 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" podUID="6e6fc336-ee86-4c81-bbc7-76b241f4cffa" Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.486783 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" podUID="f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf" Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.487261 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" 
event={"ID":"1655eb12-9c61-4959-9886-bd6f50b95292","Type":"ContainerStarted","Data":"9f8e834ab487207bb5585cd9fa2abc427cbc77c46669722d474d80a6df86b75e"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.489180 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" event={"ID":"d57e7a62-6958-4e64-98e6-a22857b00e32","Type":"ContainerStarted","Data":"918c45309db2820a46ffb7cbd062c3924b4959987c904f59f316084ad12f9fb0"} Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.495549 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" podUID="1655eb12-9c61-4959-9886-bd6f50b95292" Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.498989 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" event={"ID":"e0b4d539-a10d-4f94-8097-667df133713d","Type":"ContainerStarted","Data":"3f6a4c686a12464e789cd7a668454db3081aa161688a14e33e00ffd7db9c875c"} Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.502577 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" podUID="e0b4d539-a10d-4f94-8097-667df133713d" Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.502608 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" event={"ID":"e48eab37-9bd2-4f8d-892a-4436c68bab21","Type":"ContainerStarted","Data":"b2c6c6978991e9f2092fb91a6ee4298ff34ba79729f40339339d0efabe2d5fe5"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.504050 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" event={"ID":"f7046b74-0868-4ee1-b917-56e695a94d16","Type":"ContainerStarted","Data":"a16ed36b39e35ff22543e2f0091802e745322323a352cf58665bfa0dfd66a5fd"} Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.512269 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" event={"ID":"9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3","Type":"ContainerStarted","Data":"e8dcd2935ea0ac72949a1f001e26cc59360e3193b65da2290b5b16e0dfd59952"} Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.513798 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" podUID="9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3" Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.514177 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" 
event={"ID":"785c9182-9230-4d64-9a16-81877ee4d03e","Type":"ContainerStarted","Data":"d79a8f99ca027584093999336ffffb9311135e87c019e407608055685e3e31d0"} Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.515229 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" podUID="785c9182-9230-4d64-9a16-81877ee4d03e" Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.536814 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.536967 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.537029 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:19.537009468 +0000 UTC m=+1012.427704901 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "webhook-server-cert" not found Dec 03 17:56:17 crc kubenswrapper[4687]: I1203 17:56:17.537739 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.541594 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 17:56:17 crc kubenswrapper[4687]: E1203 17:56:17.541660 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:19.541639742 +0000 UTC m=+1012.432335175 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "metrics-server-cert" not found Dec 03 17:56:18 crc kubenswrapper[4687]: E1203 17:56:18.543565 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" podUID="9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3" Dec 03 17:56:18 crc kubenswrapper[4687]: E1203 17:56:18.545685 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" podUID="1655eb12-9c61-4959-9886-bd6f50b95292" Dec 03 17:56:18 crc kubenswrapper[4687]: E1203 17:56:18.545776 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" podUID="e0b4d539-a10d-4f94-8097-667df133713d" Dec 03 17:56:18 crc kubenswrapper[4687]: E1203 17:56:18.545826 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" podUID="785c9182-9230-4d64-9a16-81877ee4d03e" Dec 03 17:56:18 crc kubenswrapper[4687]: E1203 17:56:18.545930 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" podUID="6e6fc336-ee86-4c81-bbc7-76b241f4cffa" Dec 03 17:56:18 crc kubenswrapper[4687]: E1203 17:56:18.566162 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" podUID="f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf" Dec 03 17:56:18 crc kubenswrapper[4687]: I1203 17:56:18.771033 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:18 crc kubenswrapper[4687]: E1203 17:56:18.771362 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:18 crc kubenswrapper[4687]: E1203 17:56:18.771448 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert podName:6abb698e-8c6d-40c8-b87d-dcd828bba5d3 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:22.771416837 +0000 UTC m=+1015.662112270 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert") pod "infra-operator-controller-manager-57548d458d-lx2md" (UID: "6abb698e-8c6d-40c8-b87d-dcd828bba5d3") : secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:19 crc kubenswrapper[4687]: I1203 17:56:19.175459 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:19 crc kubenswrapper[4687]: E1203 17:56:19.175632 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:19 crc kubenswrapper[4687]: E1203 17:56:19.175807 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert podName:58a46d42-dade-4bfe-b9b0-bddac75f1d81 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:23.175788655 +0000 UTC m=+1016.066484088 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" (UID: "58a46d42-dade-4bfe-b9b0-bddac75f1d81") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:19 crc kubenswrapper[4687]: I1203 17:56:19.585469 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:19 crc kubenswrapper[4687]: I1203 17:56:19.585571 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:19 crc kubenswrapper[4687]: E1203 17:56:19.585655 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 17:56:19 crc kubenswrapper[4687]: E1203 17:56:19.585726 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 17:56:19 crc kubenswrapper[4687]: E1203 17:56:19.585784 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:23.585763253 +0000 UTC m=+1016.476458686 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "webhook-server-cert" not found Dec 03 17:56:19 crc kubenswrapper[4687]: E1203 17:56:19.600533 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:23.600486612 +0000 UTC m=+1016.491182045 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "metrics-server-cert" not found Dec 03 17:56:22 crc kubenswrapper[4687]: I1203 17:56:22.840062 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:22 crc kubenswrapper[4687]: E1203 17:56:22.840294 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:22 crc kubenswrapper[4687]: E1203 17:56:22.840524 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert podName:6abb698e-8c6d-40c8-b87d-dcd828bba5d3 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:30.840497961 +0000 UTC m=+1023.731193394 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert") pod "infra-operator-controller-manager-57548d458d-lx2md" (UID: "6abb698e-8c6d-40c8-b87d-dcd828bba5d3") : secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:23 crc kubenswrapper[4687]: I1203 17:56:23.246756 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:23 crc kubenswrapper[4687]: E1203 17:56:23.246944 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:23 crc kubenswrapper[4687]: E1203 17:56:23.247038 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert podName:58a46d42-dade-4bfe-b9b0-bddac75f1d81 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:31.247012556 +0000 UTC m=+1024.137707989 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" (UID: "58a46d42-dade-4bfe-b9b0-bddac75f1d81") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:23 crc kubenswrapper[4687]: I1203 17:56:23.652823 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:23 crc kubenswrapper[4687]: I1203 17:56:23.652927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:23 crc kubenswrapper[4687]: E1203 17:56:23.653046 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 17:56:23 crc kubenswrapper[4687]: E1203 17:56:23.653089 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 17:56:23 crc kubenswrapper[4687]: E1203 17:56:23.653162 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:31.653132281 +0000 UTC m=+1024.543827794 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "metrics-server-cert" not found Dec 03 17:56:23 crc kubenswrapper[4687]: E1203 17:56:23.653183 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:31.653176002 +0000 UTC m=+1024.543871435 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "webhook-server-cert" not found Dec 03 17:56:29 crc kubenswrapper[4687]: E1203 17:56:29.691318 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 03 17:56:29 crc kubenswrapper[4687]: E1203 17:56:29.691899 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tcfxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-sdbgv_openstack-operators(0e3acf7a-4766-4f89-9f70-d5ec2690318b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:56:30 crc kubenswrapper[4687]: I1203 17:56:30.869299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:30 crc kubenswrapper[4687]: E1203 17:56:30.869565 4687 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:30 crc kubenswrapper[4687]: E1203 17:56:30.869932 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert podName:6abb698e-8c6d-40c8-b87d-dcd828bba5d3 nodeName:}" failed. 
No retries permitted until 2025-12-03 17:56:46.86990801 +0000 UTC m=+1039.760603513 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert") pod "infra-operator-controller-manager-57548d458d-lx2md" (UID: "6abb698e-8c6d-40c8-b87d-dcd828bba5d3") : secret "infra-operator-webhook-server-cert" not found Dec 03 17:56:31 crc kubenswrapper[4687]: E1203 17:56:31.093374 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" podUID="0e3acf7a-4766-4f89-9f70-d5ec2690318b" Dec 03 17:56:31 crc kubenswrapper[4687]: E1203 17:56:31.198388 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" podUID="b119316e-0e6a-43d8-a5e3-0068f099fad0" Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.275926 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:31 crc kubenswrapper[4687]: E1203 17:56:31.276049 4687 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:31 crc kubenswrapper[4687]: E1203 17:56:31.276093 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert 
podName:58a46d42-dade-4bfe-b9b0-bddac75f1d81 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:47.276079226 +0000 UTC m=+1040.166774659 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" (UID: "58a46d42-dade-4bfe-b9b0-bddac75f1d81") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.686021 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.686168 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:31 crc kubenswrapper[4687]: E1203 17:56:31.686235 4687 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 17:56:31 crc kubenswrapper[4687]: E1203 17:56:31.686298 4687 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 17:56:31 crc kubenswrapper[4687]: E1203 17:56:31.686341 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 
nodeName:}" failed. No retries permitted until 2025-12-03 17:56:47.686318813 +0000 UTC m=+1040.577014336 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "metrics-server-cert" not found Dec 03 17:56:31 crc kubenswrapper[4687]: E1203 17:56:31.686387 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs podName:a9c3ecf7-40b8-43a9-902d-0fe02be37037 nodeName:}" failed. No retries permitted until 2025-12-03 17:56:47.686376154 +0000 UTC m=+1040.577071687 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs") pod "openstack-operator-controller-manager-65f8659594-f2bcj" (UID: "a9c3ecf7-40b8-43a9-902d-0fe02be37037") : secret "webhook-server-cert" not found Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.693014 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" event={"ID":"d57e7a62-6958-4e64-98e6-a22857b00e32","Type":"ContainerStarted","Data":"32a38e7f1af24d95af294546da04abe71284200a7cb47c6790a9ae10a7b79ee3"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.707775 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" event={"ID":"5952221c-60d0-4159-bbd8-2adf2f1e3d8e","Type":"ContainerStarted","Data":"9b9738f580ef06b8013520cd23490a85813c764762a91514a674dceb59c28964"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.707827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" 
event={"ID":"5952221c-60d0-4159-bbd8-2adf2f1e3d8e","Type":"ContainerStarted","Data":"7603c5e8c1fd8af872daea177f0fdd374df5ca3f605c99202d49a400c87bf490"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.708843 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.722916 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" event={"ID":"d3d2df8d-6f3d-4f5d-afd3-cef00553188e","Type":"ContainerStarted","Data":"67aac6df5c5e5ee4e52582b2eb133e1618eedbc8a55a5d6cf1efa622afdc1f5a"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.734417 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" event={"ID":"b63b97e0-be73-4e96-9904-9f5c030a0afb","Type":"ContainerStarted","Data":"7a6ef43cc67babe3726c488c0de6f2ff311cdc6e26c2211f232bc836de8faabe"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.744165 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" event={"ID":"f7046b74-0868-4ee1-b917-56e695a94d16","Type":"ContainerStarted","Data":"6511c4c3075cb5f9d743ef75e08ea337e50186c3f3bcf0b47feb1ac62fa12220"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.744205 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" event={"ID":"f7046b74-0868-4ee1-b917-56e695a94d16","Type":"ContainerStarted","Data":"16846fedf4fae6d1cba6c6b7ed4d784f3ab5c19794a92674f2aaf3c07f5781b1"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.748508 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" podStartSLOduration=4.225444692 
podStartE2EDuration="17.748497613s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.831670156 +0000 UTC m=+1009.722365589" lastFinishedPulling="2025-12-03 17:56:30.354723077 +0000 UTC m=+1023.245418510" observedRunningTime="2025-12-03 17:56:31.745536823 +0000 UTC m=+1024.636232256" watchObservedRunningTime="2025-12-03 17:56:31.748497613 +0000 UTC m=+1024.639193046" Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.766613 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" event={"ID":"e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b","Type":"ContainerStarted","Data":"d8445f8ccd173cfab3fd4156413aa820a0982578525b38e298f7d487877d08e1"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.766662 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" event={"ID":"e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b","Type":"ContainerStarted","Data":"d0bf0b22cd1624027f463b713fa2d51c5a85a10597e6a9dddbd3b467e2c7f3ec"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.767591 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.789649 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" event={"ID":"e48eab37-9bd2-4f8d-892a-4436c68bab21","Type":"ContainerStarted","Data":"e6dc2e5a591aeb7395816d6e04e16c199656003ffb602d76ab5524d390a20a9f"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.802102 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" podStartSLOduration=3.9065402320000002 podStartE2EDuration="17.80208178s" podCreationTimestamp="2025-12-03 
17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.459088847 +0000 UTC m=+1009.349784280" lastFinishedPulling="2025-12-03 17:56:30.354630395 +0000 UTC m=+1023.245325828" observedRunningTime="2025-12-03 17:56:31.797170108 +0000 UTC m=+1024.687865541" watchObservedRunningTime="2025-12-03 17:56:31.80208178 +0000 UTC m=+1024.692777213" Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.806680 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" event={"ID":"0e3acf7a-4766-4f89-9f70-d5ec2690318b","Type":"ContainerStarted","Data":"d0223bd87e50d13654a9d7612516190a9c9c8d1c395577afec9b4a4064305865"} Dec 03 17:56:31 crc kubenswrapper[4687]: E1203 17:56:31.809156 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" podUID="0e3acf7a-4766-4f89-9f70-d5ec2690318b" Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.834535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" event={"ID":"496e4d0a-a886-4d53-993c-66081d8843ae","Type":"ContainerStarted","Data":"189b9aae52be9c34d49515d258744e74ff6ba4210297c58284f8d06ecf0dac06"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.853625 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" event={"ID":"491fb200-3ef9-4833-83c6-22b575b46998","Type":"ContainerStarted","Data":"86ec98dd86a9a2a277960ca1c858b7773d9379f3609da7620a3a68fe851efbc9"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.853669 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" event={"ID":"491fb200-3ef9-4833-83c6-22b575b46998","Type":"ContainerStarted","Data":"6256eb346529113e8ad3c56bf488db16a9afd5613a42a8e070468d36eb56d914"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.854294 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.872540 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" event={"ID":"6fa88489-3c47-4369-9f87-a3f029f75a42","Type":"ContainerStarted","Data":"1f94246039115e7346642914ef95d462e8be3cff7e434fe6e05df0961ff8e813"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.886643 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" event={"ID":"379ff892-6dae-4b1b-9ae1-f6b7da9f4db6","Type":"ContainerStarted","Data":"b52df69b3f848c6f746640260297073ab9dfe2cf9acf78f4c5ed9133d21f4a91"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.906520 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" event={"ID":"b119316e-0e6a-43d8-a5e3-0068f099fad0","Type":"ContainerStarted","Data":"c3ec5c41a9ccb47e37405499dd44b2f0388a1307fcde2c0670be57b1038ab52a"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.925085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" event={"ID":"59db1fe9-9d85-4346-8718-4e9139c8acb9","Type":"ContainerStarted","Data":"5ec49fce89b84dfcc013459a68e766f79928405b4cc2951e2328668f16fddf70"} Dec 03 17:56:31 crc kubenswrapper[4687]: I1203 17:56:31.995017 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" podStartSLOduration=4.507709499 podStartE2EDuration="17.994996964s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.854948645 +0000 UTC m=+1009.745644078" lastFinishedPulling="2025-12-03 17:56:30.34223611 +0000 UTC m=+1023.232931543" observedRunningTime="2025-12-03 17:56:31.957600163 +0000 UTC m=+1024.848295606" watchObservedRunningTime="2025-12-03 17:56:31.994996964 +0000 UTC m=+1024.885692397" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.942420 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" event={"ID":"379ff892-6dae-4b1b-9ae1-f6b7da9f4db6","Type":"ContainerStarted","Data":"51d6b4cee58bd5a82754d597d28e8e1c4ed949a3ba40269a3d6b0c6c6a88b906"} Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.942724 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.948695 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" event={"ID":"e48eab37-9bd2-4f8d-892a-4436c68bab21","Type":"ContainerStarted","Data":"3f7b477ba68c934391eae55c5f8a3f75cf33c55c7129e114cdd87d61286f4f40"} Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.948741 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.952395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" event={"ID":"d3d2df8d-6f3d-4f5d-afd3-cef00553188e","Type":"ContainerStarted","Data":"030cf2be3a6939218b7728a883832081f45040cf9e6aef315d3b9a75d37e1e4b"} Dec 03 
17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.952830 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.954843 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" event={"ID":"b63b97e0-be73-4e96-9904-9f5c030a0afb","Type":"ContainerStarted","Data":"ceac3174f82ed6dbe4059585acb6134f0e4d02e2f51f79adb4ddd1d07f42e34c"} Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.955631 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.957250 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" event={"ID":"496e4d0a-a886-4d53-993c-66081d8843ae","Type":"ContainerStarted","Data":"81b3aa1de3c1d10bdd630aecdd4a4bc2107eb463b9ea13705f5bfa7d74616c0e"} Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.957610 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.964222 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" event={"ID":"59db1fe9-9d85-4346-8718-4e9139c8acb9","Type":"ContainerStarted","Data":"b0ef9838c4a439b1b08ec909e6e3eda8d0332033b4151426b369c4fe95a91413"} Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.964795 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.965000 4687 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" podStartSLOduration=5.232493536 podStartE2EDuration="18.964977857s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.656006289 +0000 UTC m=+1009.546701722" lastFinishedPulling="2025-12-03 17:56:30.38849061 +0000 UTC m=+1023.279186043" observedRunningTime="2025-12-03 17:56:32.962542091 +0000 UTC m=+1025.853237524" watchObservedRunningTime="2025-12-03 17:56:32.964977857 +0000 UTC m=+1025.855673290" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.972137 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" event={"ID":"d57e7a62-6958-4e64-98e6-a22857b00e32","Type":"ContainerStarted","Data":"5d179f110cf6055851df157089cf97883b3b13d13ebabf28b34f9163bd909324"} Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.972825 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.975311 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" event={"ID":"6fa88489-3c47-4369-9f87-a3f029f75a42","Type":"ContainerStarted","Data":"9591bf9164feec36e4fe2ff68677cc7d888fbcc94a9c33ebdf2f3f8e9f9d45eb"} Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.975342 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.982243 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" Dec 03 17:56:32 crc kubenswrapper[4687]: E1203 17:56:32.984592 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" podUID="0e3acf7a-4766-4f89-9f70-d5ec2690318b" Dec 03 17:56:32 crc kubenswrapper[4687]: I1203 17:56:32.986536 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" podStartSLOduration=5.046333845 podStartE2EDuration="18.986518769s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.428361227 +0000 UTC m=+1009.319056660" lastFinishedPulling="2025-12-03 17:56:30.368546161 +0000 UTC m=+1023.259241584" observedRunningTime="2025-12-03 17:56:32.985013108 +0000 UTC m=+1025.875708541" watchObservedRunningTime="2025-12-03 17:56:32.986518769 +0000 UTC m=+1025.877214202" Dec 03 17:56:33 crc kubenswrapper[4687]: I1203 17:56:33.013526 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" podStartSLOduration=5.125524246 podStartE2EDuration="19.013510279s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.481194485 +0000 UTC m=+1009.371889918" lastFinishedPulling="2025-12-03 17:56:30.369180528 +0000 UTC m=+1023.259875951" observedRunningTime="2025-12-03 17:56:33.012579034 +0000 UTC m=+1025.903274467" watchObservedRunningTime="2025-12-03 17:56:33.013510279 +0000 UTC m=+1025.904205712" Dec 03 17:56:33 crc kubenswrapper[4687]: I1203 17:56:33.037053 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" podStartSLOduration=5.057363744 podStartE2EDuration="19.037034304s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" 
firstStartedPulling="2025-12-03 17:56:16.375697244 +0000 UTC m=+1009.266392677" lastFinishedPulling="2025-12-03 17:56:30.355367804 +0000 UTC m=+1023.246063237" observedRunningTime="2025-12-03 17:56:33.035923255 +0000 UTC m=+1025.926618688" watchObservedRunningTime="2025-12-03 17:56:33.037034304 +0000 UTC m=+1025.927729737" Dec 03 17:56:33 crc kubenswrapper[4687]: I1203 17:56:33.062935 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" podStartSLOduration=5.205386675 podStartE2EDuration="19.062909604s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.49802004 +0000 UTC m=+1009.388715483" lastFinishedPulling="2025-12-03 17:56:30.355542989 +0000 UTC m=+1023.246238412" observedRunningTime="2025-12-03 17:56:33.058618648 +0000 UTC m=+1025.949314101" watchObservedRunningTime="2025-12-03 17:56:33.062909604 +0000 UTC m=+1025.953605047" Dec 03 17:56:33 crc kubenswrapper[4687]: I1203 17:56:33.084473 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" podStartSLOduration=4.638041441 podStartE2EDuration="19.084440005s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:15.922794104 +0000 UTC m=+1008.813489537" lastFinishedPulling="2025-12-03 17:56:30.369192668 +0000 UTC m=+1023.259888101" observedRunningTime="2025-12-03 17:56:33.081766943 +0000 UTC m=+1025.972462386" watchObservedRunningTime="2025-12-03 17:56:33.084440005 +0000 UTC m=+1025.975135438" Dec 03 17:56:33 crc kubenswrapper[4687]: I1203 17:56:33.103391 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" podStartSLOduration=5.21372654 podStartE2EDuration="19.103372047s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" 
firstStartedPulling="2025-12-03 17:56:16.46508211 +0000 UTC m=+1009.355777543" lastFinishedPulling="2025-12-03 17:56:30.354727617 +0000 UTC m=+1023.245423050" observedRunningTime="2025-12-03 17:56:33.097331513 +0000 UTC m=+1025.988026946" watchObservedRunningTime="2025-12-03 17:56:33.103372047 +0000 UTC m=+1025.994067480" Dec 03 17:56:33 crc kubenswrapper[4687]: I1203 17:56:33.130207 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" podStartSLOduration=5.246380142 podStartE2EDuration="19.130186052s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.47178677 +0000 UTC m=+1009.362482203" lastFinishedPulling="2025-12-03 17:56:30.35559268 +0000 UTC m=+1023.246288113" observedRunningTime="2025-12-03 17:56:33.125251558 +0000 UTC m=+1026.015946991" watchObservedRunningTime="2025-12-03 17:56:33.130186052 +0000 UTC m=+1026.020881505" Dec 03 17:56:33 crc kubenswrapper[4687]: I1203 17:56:33.163884 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" podStartSLOduration=5.464230609 podStartE2EDuration="19.163865322s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.655039453 +0000 UTC m=+1009.545734886" lastFinishedPulling="2025-12-03 17:56:30.354674166 +0000 UTC m=+1023.245369599" observedRunningTime="2025-12-03 17:56:33.159866534 +0000 UTC m=+1026.050561967" watchObservedRunningTime="2025-12-03 17:56:33.163865322 +0000 UTC m=+1026.054560755" Dec 03 17:56:35 crc kubenswrapper[4687]: I1203 17:56:35.020670 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-6xgff" Dec 03 17:56:35 crc kubenswrapper[4687]: I1203 17:56:35.075584 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-h6x45" Dec 03 17:56:35 crc kubenswrapper[4687]: I1203 17:56:35.209824 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-mzvdw" Dec 03 17:56:35 crc kubenswrapper[4687]: I1203 17:56:35.453720 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-7ftp5" Dec 03 17:56:35 crc kubenswrapper[4687]: I1203 17:56:35.663067 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-hrlqq" Dec 03 17:56:35 crc kubenswrapper[4687]: I1203 17:56:35.694725 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-bfwb6" Dec 03 17:56:35 crc kubenswrapper[4687]: I1203 17:56:35.856827 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-xj6hg" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.026322 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" event={"ID":"e0b4d539-a10d-4f94-8097-667df133713d","Type":"ContainerStarted","Data":"2d44dfc2b108ebc6f5b66143684b446781746c78ad1b911716351bb3367a7ab7"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.027112 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" event={"ID":"e0b4d539-a10d-4f94-8097-667df133713d","Type":"ContainerStarted","Data":"0feab5737456609a69197a818782e203b6d33b0d5296db3f5f1ff9f91dafc5f6"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.027329 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.028092 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" event={"ID":"9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3","Type":"ContainerStarted","Data":"1123b94d012ad1f48024e4daac2872bd74be50c4f2a8c9fd709f08236c8a84ff"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.029540 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" event={"ID":"6e6fc336-ee86-4c81-bbc7-76b241f4cffa","Type":"ContainerStarted","Data":"3c97f6904d1dda6511bc878d94b18e3014878e3514d84bcf4cc477c15f2ae66c"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.029566 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" event={"ID":"6e6fc336-ee86-4c81-bbc7-76b241f4cffa","Type":"ContainerStarted","Data":"6f94979dc42b1b7089a9abacfa8ce4413064b7ecadbacd5c93c860db53a43a7b"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.029727 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.030989 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" event={"ID":"b119316e-0e6a-43d8-a5e3-0068f099fad0","Type":"ContainerStarted","Data":"29ebbf799d8d437769350e86cf73fd3e3500bc04803c7a9bc3c3d1df731361e9"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.031074 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.032270 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" event={"ID":"f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf","Type":"ContainerStarted","Data":"bc235c5ae3d5162bb2f788ac511db0670e97ee1c06aac482bbb85f7cceeee470"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.032315 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" event={"ID":"f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf","Type":"ContainerStarted","Data":"82cf93db81c3da8291a0e6ac53da03c6b8cd37fe3c0da9e84a7af60121f4028a"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.032460 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.033484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" event={"ID":"785c9182-9230-4d64-9a16-81877ee4d03e","Type":"ContainerStarted","Data":"d4b378080904e9ecb5cadf5b5c9da30c35c1637f5adfd9839b32e641cfe97975"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.033513 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" event={"ID":"785c9182-9230-4d64-9a16-81877ee4d03e","Type":"ContainerStarted","Data":"5e7d382bd96af28da1e241002509de31e217f6808247a69a13aace02f1368c8a"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.033655 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.034935 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" 
event={"ID":"1655eb12-9c61-4959-9886-bd6f50b95292","Type":"ContainerStarted","Data":"2823b33871121fa52b1663c4861ed4add5f6fe01cf2773635b907f2eeaf03ed4"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.034973 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" event={"ID":"1655eb12-9c61-4959-9886-bd6f50b95292","Type":"ContainerStarted","Data":"a9a367acc9a8a91ca28f28ed5209faecdcf92aeaeab9242524c51dc309ab3e26"} Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.035136 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.046001 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" podStartSLOduration=3.830396927 podStartE2EDuration="27.045983931s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.941825313 +0000 UTC m=+1009.832520746" lastFinishedPulling="2025-12-03 17:56:40.157412317 +0000 UTC m=+1033.048107750" observedRunningTime="2025-12-03 17:56:41.042493526 +0000 UTC m=+1033.933188959" watchObservedRunningTime="2025-12-03 17:56:41.045983931 +0000 UTC m=+1033.936679364" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.056543 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wpfh8" podStartSLOduration=2.761699361 podStartE2EDuration="26.056522965s" podCreationTimestamp="2025-12-03 17:56:15 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.960692994 +0000 UTC m=+1009.851388427" lastFinishedPulling="2025-12-03 17:56:40.255516588 +0000 UTC m=+1033.146212031" observedRunningTime="2025-12-03 17:56:41.054574672 +0000 UTC m=+1033.945270115" watchObservedRunningTime="2025-12-03 17:56:41.056522965 
+0000 UTC m=+1033.947218398" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.067641 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" podStartSLOduration=2.883609014 podStartE2EDuration="26.067623145s" podCreationTimestamp="2025-12-03 17:56:15 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.973412077 +0000 UTC m=+1009.864107500" lastFinishedPulling="2025-12-03 17:56:40.157426198 +0000 UTC m=+1033.048121631" observedRunningTime="2025-12-03 17:56:41.067357738 +0000 UTC m=+1033.958053171" watchObservedRunningTime="2025-12-03 17:56:41.067623145 +0000 UTC m=+1033.958318578" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.084389 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" podStartSLOduration=3.839903583 podStartE2EDuration="27.084370007s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.983600502 +0000 UTC m=+1009.874295935" lastFinishedPulling="2025-12-03 17:56:40.228066926 +0000 UTC m=+1033.118762359" observedRunningTime="2025-12-03 17:56:41.083454922 +0000 UTC m=+1033.974150375" watchObservedRunningTime="2025-12-03 17:56:41.084370007 +0000 UTC m=+1033.975065440" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.107703 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" podStartSLOduration=3.323558254 podStartE2EDuration="26.107682028s" podCreationTimestamp="2025-12-03 17:56:15 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.964352752 +0000 UTC m=+1009.855048185" lastFinishedPulling="2025-12-03 17:56:39.748476526 +0000 UTC m=+1032.639171959" observedRunningTime="2025-12-03 17:56:41.104967434 +0000 UTC m=+1033.995662867" watchObservedRunningTime="2025-12-03 17:56:41.107682028 +0000 UTC 
m=+1033.998377461" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.131528 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" podStartSLOduration=2.942959739 podStartE2EDuration="26.131510572s" podCreationTimestamp="2025-12-03 17:56:15 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.973233232 +0000 UTC m=+1009.863928665" lastFinishedPulling="2025-12-03 17:56:40.161784055 +0000 UTC m=+1033.052479498" observedRunningTime="2025-12-03 17:56:41.125327544 +0000 UTC m=+1034.016022977" watchObservedRunningTime="2025-12-03 17:56:41.131510572 +0000 UTC m=+1034.022206015" Dec 03 17:56:41 crc kubenswrapper[4687]: I1203 17:56:41.150076 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" podStartSLOduration=4.256481941 podStartE2EDuration="27.150055113s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.855000787 +0000 UTC m=+1009.745696220" lastFinishedPulling="2025-12-03 17:56:39.748573939 +0000 UTC m=+1032.639269392" observedRunningTime="2025-12-03 17:56:41.142267122 +0000 UTC m=+1034.032962555" watchObservedRunningTime="2025-12-03 17:56:41.150055113 +0000 UTC m=+1034.040750546" Dec 03 17:56:44 crc kubenswrapper[4687]: I1203 17:56:44.906731 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-zmzr6" Dec 03 17:56:44 crc kubenswrapper[4687]: I1203 17:56:44.933842 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-fn5xb" Dec 03 17:56:45 crc kubenswrapper[4687]: I1203 17:56:45.017564 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-nwzp4" Dec 03 17:56:45 crc 
kubenswrapper[4687]: I1203 17:56:45.203923 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-fcwrt" Dec 03 17:56:45 crc kubenswrapper[4687]: I1203 17:56:45.495823 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9pldz" Dec 03 17:56:45 crc kubenswrapper[4687]: I1203 17:56:45.687976 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-xjjxv" Dec 03 17:56:45 crc kubenswrapper[4687]: I1203 17:56:45.867726 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-vpdn7" Dec 03 17:56:45 crc kubenswrapper[4687]: I1203 17:56:45.923299 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-gbkkg" Dec 03 17:56:45 crc kubenswrapper[4687]: I1203 17:56:45.949526 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vxwfl" Dec 03 17:56:45 crc kubenswrapper[4687]: I1203 17:56:45.970591 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-58bfx" Dec 03 17:56:46 crc kubenswrapper[4687]: I1203 17:56:46.001748 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-xvq78" Dec 03 17:56:46 crc kubenswrapper[4687]: I1203 17:56:46.912393 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: 
\"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:46 crc kubenswrapper[4687]: I1203 17:56:46.917954 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6abb698e-8c6d-40c8-b87d-dcd828bba5d3-cert\") pod \"infra-operator-controller-manager-57548d458d-lx2md\" (UID: \"6abb698e-8c6d-40c8-b87d-dcd828bba5d3\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:46 crc kubenswrapper[4687]: I1203 17:56:46.933616 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nwzck" Dec 03 17:56:46 crc kubenswrapper[4687]: I1203 17:56:46.942571 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.323445 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.329102 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58a46d42-dade-4bfe-b9b0-bddac75f1d81-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w\" (UID: \"58a46d42-dade-4bfe-b9b0-bddac75f1d81\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.353463 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/infra-operator-controller-manager-57548d458d-lx2md"] Dec 03 17:56:47 crc kubenswrapper[4687]: W1203 17:56:47.358536 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6abb698e_8c6d_40c8_b87d_dcd828bba5d3.slice/crio-bad8996d5dc281681138d437d391b3ff23d2e1931a01440706f26b0763ab2cfd WatchSource:0}: Error finding container bad8996d5dc281681138d437d391b3ff23d2e1931a01440706f26b0763ab2cfd: Status 404 returned error can't find the container with id bad8996d5dc281681138d437d391b3ff23d2e1931a01440706f26b0763ab2cfd Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.495984 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pwvqb" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.505520 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.730142 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.730482 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:47 crc kubenswrapper[4687]: 
I1203 17:56:47.734285 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-metrics-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.734881 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9c3ecf7-40b8-43a9-902d-0fe02be37037-webhook-certs\") pod \"openstack-operator-controller-manager-65f8659594-f2bcj\" (UID: \"a9c3ecf7-40b8-43a9-902d-0fe02be37037\") " pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.842529 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8g6fq" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.850870 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:47 crc kubenswrapper[4687]: I1203 17:56:47.927989 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w"] Dec 03 17:56:47 crc kubenswrapper[4687]: W1203 17:56:47.943414 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58a46d42_dade_4bfe_b9b0_bddac75f1d81.slice/crio-347d010a3c64bc68d0c5ade8ac3b5f6219902f390acf78833c9076640107180f WatchSource:0}: Error finding container 347d010a3c64bc68d0c5ade8ac3b5f6219902f390acf78833c9076640107180f: Status 404 returned error can't find the container with id 347d010a3c64bc68d0c5ade8ac3b5f6219902f390acf78833c9076640107180f Dec 03 17:56:48 crc kubenswrapper[4687]: I1203 17:56:48.079146 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" event={"ID":"6abb698e-8c6d-40c8-b87d-dcd828bba5d3","Type":"ContainerStarted","Data":"bad8996d5dc281681138d437d391b3ff23d2e1931a01440706f26b0763ab2cfd"} Dec 03 17:56:48 crc kubenswrapper[4687]: I1203 17:56:48.080098 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" event={"ID":"58a46d42-dade-4bfe-b9b0-bddac75f1d81","Type":"ContainerStarted","Data":"347d010a3c64bc68d0c5ade8ac3b5f6219902f390acf78833c9076640107180f"} Dec 03 17:56:48 crc kubenswrapper[4687]: I1203 17:56:48.300324 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj"] Dec 03 17:56:49 crc kubenswrapper[4687]: I1203 17:56:49.089736 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" 
event={"ID":"a9c3ecf7-40b8-43a9-902d-0fe02be37037","Type":"ContainerStarted","Data":"e9eb59cabf02c116ad3d85e6811644f76c382099dcd1ab2dce38be8ebe6022e8"} Dec 03 17:56:49 crc kubenswrapper[4687]: I1203 17:56:49.089807 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" event={"ID":"a9c3ecf7-40b8-43a9-902d-0fe02be37037","Type":"ContainerStarted","Data":"0b691d32cfbf15517615ac333fe7308e137827669e5927be2a433700f386b4fc"} Dec 03 17:56:49 crc kubenswrapper[4687]: I1203 17:56:49.089940 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:56:49 crc kubenswrapper[4687]: I1203 17:56:49.132406 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" podStartSLOduration=34.132392356 podStartE2EDuration="34.132392356s" podCreationTimestamp="2025-12-03 17:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:56:49.131735858 +0000 UTC m=+1042.022431301" watchObservedRunningTime="2025-12-03 17:56:49.132392356 +0000 UTC m=+1042.023087789" Dec 03 17:56:53 crc kubenswrapper[4687]: I1203 17:56:53.129732 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" event={"ID":"0e3acf7a-4766-4f89-9f70-d5ec2690318b","Type":"ContainerStarted","Data":"1b95e268a933b43dcefb0b74fb28eb311472200a792383c71ad2a13e47718e1f"} Dec 03 17:56:53 crc kubenswrapper[4687]: I1203 17:56:53.130557 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" Dec 03 17:56:53 crc kubenswrapper[4687]: I1203 17:56:53.161365 4687 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" podStartSLOduration=3.367575781 podStartE2EDuration="39.16132728s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:16.429175729 +0000 UTC m=+1009.319871162" lastFinishedPulling="2025-12-03 17:56:52.222927238 +0000 UTC m=+1045.113622661" observedRunningTime="2025-12-03 17:56:53.146886671 +0000 UTC m=+1046.037582104" watchObservedRunningTime="2025-12-03 17:56:53.16132728 +0000 UTC m=+1046.052022713" Dec 03 17:56:54 crc kubenswrapper[4687]: I1203 17:56:54.137356 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" event={"ID":"6abb698e-8c6d-40c8-b87d-dcd828bba5d3","Type":"ContainerStarted","Data":"9e1745f218dedef05d29b6f41d1cb85bee8c84c11418b4e90cb764de1013d1bb"} Dec 03 17:56:54 crc kubenswrapper[4687]: I1203 17:56:54.137656 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" event={"ID":"6abb698e-8c6d-40c8-b87d-dcd828bba5d3","Type":"ContainerStarted","Data":"672592b702d5bce436f905ccb4af385a9cce65e381bb71165745c7f75b18a95f"} Dec 03 17:56:54 crc kubenswrapper[4687]: I1203 17:56:54.137673 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:56:54 crc kubenswrapper[4687]: I1203 17:56:54.139053 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" event={"ID":"58a46d42-dade-4bfe-b9b0-bddac75f1d81","Type":"ContainerStarted","Data":"2fef2c97da03bba27c33e57c3640d262d8746473cf3762c7cddf616b2b8e4865"} Dec 03 17:56:54 crc kubenswrapper[4687]: I1203 17:56:54.139082 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" event={"ID":"58a46d42-dade-4bfe-b9b0-bddac75f1d81","Type":"ContainerStarted","Data":"79d62481c4816b5936773623d6085cc1b16f51a4a279dfd1548cbe51f35eaefa"} Dec 03 17:56:54 crc kubenswrapper[4687]: I1203 17:56:54.193153 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" podStartSLOduration=34.002894973 podStartE2EDuration="40.193117384s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:47.361932992 +0000 UTC m=+1040.252628415" lastFinishedPulling="2025-12-03 17:56:53.552155373 +0000 UTC m=+1046.442850826" observedRunningTime="2025-12-03 17:56:54.163727623 +0000 UTC m=+1047.054423056" watchObservedRunningTime="2025-12-03 17:56:54.193117384 +0000 UTC m=+1047.083812817" Dec 03 17:56:54 crc kubenswrapper[4687]: I1203 17:56:54.195286 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" podStartSLOduration=34.572114454 podStartE2EDuration="40.195271342s" podCreationTimestamp="2025-12-03 17:56:14 +0000 UTC" firstStartedPulling="2025-12-03 17:56:47.946037764 +0000 UTC m=+1040.836733197" lastFinishedPulling="2025-12-03 17:56:53.569194632 +0000 UTC m=+1046.459890085" observedRunningTime="2025-12-03 17:56:54.190987397 +0000 UTC m=+1047.081682840" watchObservedRunningTime="2025-12-03 17:56:54.195271342 +0000 UTC m=+1047.085966775" Dec 03 17:56:55 crc kubenswrapper[4687]: I1203 17:56:55.144520 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:56:57 crc kubenswrapper[4687]: I1203 17:56:57.858991 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-65f8659594-f2bcj" Dec 03 17:57:05 crc kubenswrapper[4687]: I1203 17:57:05.126991 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-sdbgv" Dec 03 17:57:06 crc kubenswrapper[4687]: I1203 17:57:06.950213 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-lx2md" Dec 03 17:57:07 crc kubenswrapper[4687]: I1203 17:57:07.514741 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w" Dec 03 17:57:14 crc kubenswrapper[4687]: I1203 17:57:14.111382 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:57:14 crc kubenswrapper[4687]: I1203 17:57:14.112439 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.548547 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bkzn2"] Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.550315 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.555533 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.555689 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.555786 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.557963 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zhlmq" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.568885 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bkzn2"] Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.629199 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8xqg2"] Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.630966 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.634832 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.637306 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8xqg2"] Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.663921 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jx68\" (UniqueName: \"kubernetes.io/projected/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-kube-api-access-2jx68\") pod \"dnsmasq-dns-675f4bcbfc-bkzn2\" (UID: \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.664090 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-config\") pod \"dnsmasq-dns-675f4bcbfc-bkzn2\" (UID: \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.765386 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlsk\" (UniqueName: \"kubernetes.io/projected/84359401-be55-41fd-924a-94eea80f2273-kube-api-access-6xlsk\") pod \"dnsmasq-dns-78dd6ddcc-8xqg2\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.765452 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-config\") pod \"dnsmasq-dns-675f4bcbfc-bkzn2\" (UID: \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 
03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.765483 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8xqg2\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.765514 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-config\") pod \"dnsmasq-dns-78dd6ddcc-8xqg2\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.765540 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jx68\" (UniqueName: \"kubernetes.io/projected/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-kube-api-access-2jx68\") pod \"dnsmasq-dns-675f4bcbfc-bkzn2\" (UID: \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.766506 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-config\") pod \"dnsmasq-dns-675f4bcbfc-bkzn2\" (UID: \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.783721 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jx68\" (UniqueName: \"kubernetes.io/projected/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-kube-api-access-2jx68\") pod \"dnsmasq-dns-675f4bcbfc-bkzn2\" (UID: \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 
17:57:24.866725 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlsk\" (UniqueName: \"kubernetes.io/projected/84359401-be55-41fd-924a-94eea80f2273-kube-api-access-6xlsk\") pod \"dnsmasq-dns-78dd6ddcc-8xqg2\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.866785 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8xqg2\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.866820 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-config\") pod \"dnsmasq-dns-78dd6ddcc-8xqg2\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.867648 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-config\") pod \"dnsmasq-dns-78dd6ddcc-8xqg2\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.868369 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8xqg2\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.871657 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.887441 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xlsk\" (UniqueName: \"kubernetes.io/projected/84359401-be55-41fd-924a-94eea80f2273-kube-api-access-6xlsk\") pod \"dnsmasq-dns-78dd6ddcc-8xqg2\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:24 crc kubenswrapper[4687]: I1203 17:57:24.946465 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:25 crc kubenswrapper[4687]: I1203 17:57:25.397718 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bkzn2"] Dec 03 17:57:25 crc kubenswrapper[4687]: I1203 17:57:25.405982 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 17:57:25 crc kubenswrapper[4687]: I1203 17:57:25.452747 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8xqg2"] Dec 03 17:57:25 crc kubenswrapper[4687]: W1203 17:57:25.463784 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84359401_be55_41fd_924a_94eea80f2273.slice/crio-3d464d7068477bb6fce105be379232c246f81c9e271c4613813a82bb93e886fa WatchSource:0}: Error finding container 3d464d7068477bb6fce105be379232c246f81c9e271c4613813a82bb93e886fa: Status 404 returned error can't find the container with id 3d464d7068477bb6fce105be379232c246f81c9e271c4613813a82bb93e886fa Dec 03 17:57:26 crc kubenswrapper[4687]: I1203 17:57:26.361831 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" event={"ID":"84359401-be55-41fd-924a-94eea80f2273","Type":"ContainerStarted","Data":"3d464d7068477bb6fce105be379232c246f81c9e271c4613813a82bb93e886fa"} Dec 03 
17:57:26 crc kubenswrapper[4687]: I1203 17:57:26.363017 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" event={"ID":"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317","Type":"ContainerStarted","Data":"7497932360d0083b87f511455ad47bb3b79e1114fb5a3e79f4346ac771d8c17a"} Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.549392 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bkzn2"] Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.600737 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lp4x8"] Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.602270 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.624785 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lp4x8"] Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.718949 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lp4x8\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.719088 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dl99\" (UniqueName: \"kubernetes.io/projected/f2d32d17-7c63-427d-ba0b-d45aceaea477-kube-api-access-8dl99\") pod \"dnsmasq-dns-666b6646f7-lp4x8\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.719155 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-config\") pod \"dnsmasq-dns-666b6646f7-lp4x8\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.819935 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-config\") pod \"dnsmasq-dns-666b6646f7-lp4x8\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.820303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lp4x8\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.820424 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dl99\" (UniqueName: \"kubernetes.io/projected/f2d32d17-7c63-427d-ba0b-d45aceaea477-kube-api-access-8dl99\") pod \"dnsmasq-dns-666b6646f7-lp4x8\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.821214 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-config\") pod \"dnsmasq-dns-666b6646f7-lp4x8\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.821636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-dns-svc\") pod \"dnsmasq-dns-666b6646f7-lp4x8\" (UID: 
\"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.858823 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dl99\" (UniqueName: \"kubernetes.io/projected/f2d32d17-7c63-427d-ba0b-d45aceaea477-kube-api-access-8dl99\") pod \"dnsmasq-dns-666b6646f7-lp4x8\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.928681 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.933900 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8xqg2"] Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.953360 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-574kj"] Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.954551 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:27 crc kubenswrapper[4687]: I1203 17:57:27.975176 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-574kj"] Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.123910 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-config\") pod \"dnsmasq-dns-57d769cc4f-574kj\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.125095 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmtk\" (UniqueName: \"kubernetes.io/projected/4954cd1d-111f-40c5-b681-739da73ea439-kube-api-access-tlmtk\") pod \"dnsmasq-dns-57d769cc4f-574kj\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.125191 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-574kj\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.226838 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-574kj\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.226940 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-config\") pod \"dnsmasq-dns-57d769cc4f-574kj\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.226958 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlmtk\" (UniqueName: \"kubernetes.io/projected/4954cd1d-111f-40c5-b681-739da73ea439-kube-api-access-tlmtk\") pod \"dnsmasq-dns-57d769cc4f-574kj\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.228798 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-config\") pod \"dnsmasq-dns-57d769cc4f-574kj\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.229624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-574kj\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.256409 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlmtk\" (UniqueName: \"kubernetes.io/projected/4954cd1d-111f-40c5-b681-739da73ea439-kube-api-access-tlmtk\") pod \"dnsmasq-dns-57d769cc4f-574kj\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.278438 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lp4x8"] Dec 03 17:57:28 crc kubenswrapper[4687]: W1203 17:57:28.287483 4687 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2d32d17_7c63_427d_ba0b_d45aceaea477.slice/crio-e09cd551dad5ad0f5903edceb4ae580dc6bdd6202f9ec8b85f4b66053cc9ca7b WatchSource:0}: Error finding container e09cd551dad5ad0f5903edceb4ae580dc6bdd6202f9ec8b85f4b66053cc9ca7b: Status 404 returned error can't find the container with id e09cd551dad5ad0f5903edceb4ae580dc6bdd6202f9ec8b85f4b66053cc9ca7b Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.353323 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.437369 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" event={"ID":"f2d32d17-7c63-427d-ba0b-d45aceaea477","Type":"ContainerStarted","Data":"e09cd551dad5ad0f5903edceb4ae580dc6bdd6202f9ec8b85f4b66053cc9ca7b"} Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.768449 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.769652 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.773765 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.778814 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.781505 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7vk44" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.782926 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.783208 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.783623 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.784346 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.801107 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.810046 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-574kj"] Dec 03 17:57:28 crc kubenswrapper[4687]: W1203 17:57:28.820912 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4954cd1d_111f_40c5_b681_739da73ea439.slice/crio-d4fda51dd70cf6deb0c9ee5f8e3fdcc972d5e4d99ca189e1506a49353b5fe335 WatchSource:0}: Error finding container d4fda51dd70cf6deb0c9ee5f8e3fdcc972d5e4d99ca189e1506a49353b5fe335: Status 404 returned error 
can't find the container with id d4fda51dd70cf6deb0c9ee5f8e3fdcc972d5e4d99ca189e1506a49353b5fe335 Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947004 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947048 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947303 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947337 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " 
pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947368 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb5ts\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-kube-api-access-zb5ts\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947406 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947500 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947529 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc kubenswrapper[4687]: I1203 17:57:28.947639 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:28 crc 
kubenswrapper[4687]: I1203 17:57:28.947664 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049061 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049083 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049104 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049188 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049208 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049227 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb5ts\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-kube-api-access-zb5ts\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049242 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049270 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: 
\"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049290 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049675 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.049759 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.050050 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.050207 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.050971 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-server-conf\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.052564 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-config-data\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.055669 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.055790 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-pod-info\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.056440 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.060777 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.064567 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb5ts\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-kube-api-access-zb5ts\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.073033 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.103296 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.114291 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.116070 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.118351 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.118688 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.118734 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.118874 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.118954 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.119003 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.119068 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-psh6c" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.123236 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254218 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254270 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254322 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254359 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254377 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63e536c1-72f7-438c-b34c-b8750dd1796b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254401 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254425 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63e536c1-72f7-438c-b34c-b8750dd1796b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254460 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkrzk\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-kube-api-access-hkrzk\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.254484 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.365627 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.365703 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.365820 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63e536c1-72f7-438c-b34c-b8750dd1796b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.365884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkrzk\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-kube-api-access-hkrzk\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.365954 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.366133 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"63e536c1-72f7-438c-b34c-b8750dd1796b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.366608 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.370100 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.370463 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.374964 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63e536c1-72f7-438c-b34c-b8750dd1796b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.385535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" 
Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.385629 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.385696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.385756 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.385857 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.385898 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63e536c1-72f7-438c-b34c-b8750dd1796b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.386830 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.386860 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.388825 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.389621 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.389882 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkrzk\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-kube-api-access-hkrzk\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.417709 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.419224 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63e536c1-72f7-438c-b34c-b8750dd1796b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.453592 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.469853 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" event={"ID":"4954cd1d-111f-40c5-b681-739da73ea439","Type":"ContainerStarted","Data":"d4fda51dd70cf6deb0c9ee5f8e3fdcc972d5e4d99ca189e1506a49353b5fe335"} Dec 03 17:57:29 crc kubenswrapper[4687]: I1203 17:57:29.719788 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.002230 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 17:57:30 crc kubenswrapper[4687]: W1203 17:57:30.032306 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63e536c1_72f7_438c_b34c_b8750dd1796b.slice/crio-a1f26d35841cedbaf471b5d1cb248733134db2cafc47f63168dea48a48cc167b WatchSource:0}: Error finding container a1f26d35841cedbaf471b5d1cb248733134db2cafc47f63168dea48a48cc167b: Status 404 returned error can't find the container with id a1f26d35841cedbaf471b5d1cb248733134db2cafc47f63168dea48a48cc167b Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.485403 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 03 17:57:30 crc 
kubenswrapper[4687]: I1203 17:57:30.488287 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.490566 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-shpct" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.490873 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.493496 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.493593 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.498292 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.503304 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.520276 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63e536c1-72f7-438c-b34c-b8750dd1796b","Type":"ContainerStarted","Data":"a1f26d35841cedbaf471b5d1cb248733134db2cafc47f63168dea48a48cc167b"} Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.522787 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a","Type":"ContainerStarted","Data":"357d9f30a290c2304ed8732053ff3d8567593144605bd340fd89a90ab6809b43"} Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.538775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/04732311-c8eb-4351-a564-78ce8c8e1811-kolla-config\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.538848 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/04732311-c8eb-4351-a564-78ce8c8e1811-config-data-generated\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.538913 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/04732311-c8eb-4351-a564-78ce8c8e1811-config-data-default\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.538933 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.538952 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th5bb\" (UniqueName: \"kubernetes.io/projected/04732311-c8eb-4351-a564-78ce8c8e1811-kube-api-access-th5bb\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.539038 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/04732311-c8eb-4351-a564-78ce8c8e1811-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.539074 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04732311-c8eb-4351-a564-78ce8c8e1811-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.539107 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04732311-c8eb-4351-a564-78ce8c8e1811-operator-scripts\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.648990 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/04732311-c8eb-4351-a564-78ce8c8e1811-config-data-default\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.649055 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.649088 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th5bb\" (UniqueName: \"kubernetes.io/projected/04732311-c8eb-4351-a564-78ce8c8e1811-kube-api-access-th5bb\") pod \"openstack-galera-0\" 
(UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.649294 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.649716 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/04732311-c8eb-4351-a564-78ce8c8e1811-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.649746 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04732311-c8eb-4351-a564-78ce8c8e1811-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.649816 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04732311-c8eb-4351-a564-78ce8c8e1811-operator-scripts\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.649883 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/04732311-c8eb-4351-a564-78ce8c8e1811-kolla-config\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 
17:57:30.649904 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/04732311-c8eb-4351-a564-78ce8c8e1811-config-data-generated\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.650073 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/04732311-c8eb-4351-a564-78ce8c8e1811-config-data-default\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.651319 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/04732311-c8eb-4351-a564-78ce8c8e1811-config-data-generated\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.651569 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/04732311-c8eb-4351-a564-78ce8c8e1811-kolla-config\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.652799 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04732311-c8eb-4351-a564-78ce8c8e1811-operator-scripts\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.662332 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/04732311-c8eb-4351-a564-78ce8c8e1811-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.670331 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th5bb\" (UniqueName: \"kubernetes.io/projected/04732311-c8eb-4351-a564-78ce8c8e1811-kube-api-access-th5bb\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.670774 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04732311-c8eb-4351-a564-78ce8c8e1811-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.678235 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"04732311-c8eb-4351-a564-78ce8c8e1811\") " pod="openstack/openstack-galera-0" Dec 03 17:57:30 crc kubenswrapper[4687]: I1203 17:57:30.820445 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 03 17:57:31 crc kubenswrapper[4687]: I1203 17:57:31.460089 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 03 17:57:31 crc kubenswrapper[4687]: I1203 17:57:31.899055 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 17:57:31 crc kubenswrapper[4687]: I1203 17:57:31.901690 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:31 crc kubenswrapper[4687]: I1203 17:57:31.908826 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 03 17:57:31 crc kubenswrapper[4687]: I1203 17:57:31.908988 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tdt6n" Dec 03 17:57:31 crc kubenswrapper[4687]: I1203 17:57:31.909110 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 03 17:57:31 crc kubenswrapper[4687]: I1203 17:57:31.909440 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 03 17:57:31 crc kubenswrapper[4687]: I1203 17:57:31.923416 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.075077 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b00142cd-f59e-49d3-9d26-e1344598a59a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.075157 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00142cd-f59e-49d3-9d26-e1344598a59a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.075199 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b00142cd-f59e-49d3-9d26-e1344598a59a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.075265 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b00142cd-f59e-49d3-9d26-e1344598a59a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.075347 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.075381 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57rf9\" (UniqueName: \"kubernetes.io/projected/b00142cd-f59e-49d3-9d26-e1344598a59a-kube-api-access-57rf9\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.075412 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b00142cd-f59e-49d3-9d26-e1344598a59a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.075435 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/b00142cd-f59e-49d3-9d26-e1344598a59a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.178727 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.178788 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rf9\" (UniqueName: \"kubernetes.io/projected/b00142cd-f59e-49d3-9d26-e1344598a59a-kube-api-access-57rf9\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.178867 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b00142cd-f59e-49d3-9d26-e1344598a59a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.178890 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b00142cd-f59e-49d3-9d26-e1344598a59a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.178949 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b00142cd-f59e-49d3-9d26-e1344598a59a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.179801 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00142cd-f59e-49d3-9d26-e1344598a59a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.179843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b00142cd-f59e-49d3-9d26-e1344598a59a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.179891 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b00142cd-f59e-49d3-9d26-e1344598a59a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.181735 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b00142cd-f59e-49d3-9d26-e1344598a59a-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.181973 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.183678 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b00142cd-f59e-49d3-9d26-e1344598a59a-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.184395 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b00142cd-f59e-49d3-9d26-e1344598a59a-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.185899 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b00142cd-f59e-49d3-9d26-e1344598a59a-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.197961 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b00142cd-f59e-49d3-9d26-e1344598a59a-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.214975 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b00142cd-f59e-49d3-9d26-e1344598a59a-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.222167 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.229440 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rf9\" (UniqueName: \"kubernetes.io/projected/b00142cd-f59e-49d3-9d26-e1344598a59a-kube-api-access-57rf9\") pod \"openstack-cell1-galera-0\" (UID: \"b00142cd-f59e-49d3-9d26-e1344598a59a\") " pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.244749 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.246027 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.253839 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-jzlhx" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.255605 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.255799 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.258512 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.281271 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5wtc\" (UniqueName: \"kubernetes.io/projected/b6b36375-980f-4c1d-8ddb-61d9565db565-kube-api-access-d5wtc\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.281333 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b36375-980f-4c1d-8ddb-61d9565db565-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.281420 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6b36375-980f-4c1d-8ddb-61d9565db565-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.281446 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b36375-980f-4c1d-8ddb-61d9565db565-config-data\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.281465 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6b36375-980f-4c1d-8ddb-61d9565db565-kolla-config\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.386788 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b36375-980f-4c1d-8ddb-61d9565db565-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.386912 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6b36375-980f-4c1d-8ddb-61d9565db565-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.386941 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b36375-980f-4c1d-8ddb-61d9565db565-config-data\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.386959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6b36375-980f-4c1d-8ddb-61d9565db565-kolla-config\") pod \"memcached-0\" (UID: 
\"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.386996 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5wtc\" (UniqueName: \"kubernetes.io/projected/b6b36375-980f-4c1d-8ddb-61d9565db565-kube-api-access-d5wtc\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.388893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b6b36375-980f-4c1d-8ddb-61d9565db565-kolla-config\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.389077 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b6b36375-980f-4c1d-8ddb-61d9565db565-config-data\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.390960 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6b36375-980f-4c1d-8ddb-61d9565db565-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.399582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6b36375-980f-4c1d-8ddb-61d9565db565-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.411092 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5wtc\" 
(UniqueName: \"kubernetes.io/projected/b6b36375-980f-4c1d-8ddb-61d9565db565-kube-api-access-d5wtc\") pod \"memcached-0\" (UID: \"b6b36375-980f-4c1d-8ddb-61d9565db565\") " pod="openstack/memcached-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.528266 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 03 17:57:32 crc kubenswrapper[4687]: I1203 17:57:32.619512 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 03 17:57:34 crc kubenswrapper[4687]: I1203 17:57:34.331355 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 17:57:34 crc kubenswrapper[4687]: I1203 17:57:34.333178 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 17:57:34 crc kubenswrapper[4687]: I1203 17:57:34.334918 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qp9jm" Dec 03 17:57:34 crc kubenswrapper[4687]: I1203 17:57:34.342071 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 17:57:34 crc kubenswrapper[4687]: I1203 17:57:34.422291 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtf42\" (UniqueName: \"kubernetes.io/projected/c337091b-5995-4214-9161-18188eb806aa-kube-api-access-wtf42\") pod \"kube-state-metrics-0\" (UID: \"c337091b-5995-4214-9161-18188eb806aa\") " pod="openstack/kube-state-metrics-0" Dec 03 17:57:34 crc kubenswrapper[4687]: I1203 17:57:34.524234 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtf42\" (UniqueName: \"kubernetes.io/projected/c337091b-5995-4214-9161-18188eb806aa-kube-api-access-wtf42\") pod \"kube-state-metrics-0\" (UID: \"c337091b-5995-4214-9161-18188eb806aa\") " 
pod="openstack/kube-state-metrics-0" Dec 03 17:57:34 crc kubenswrapper[4687]: I1203 17:57:34.562179 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtf42\" (UniqueName: \"kubernetes.io/projected/c337091b-5995-4214-9161-18188eb806aa-kube-api-access-wtf42\") pod \"kube-state-metrics-0\" (UID: \"c337091b-5995-4214-9161-18188eb806aa\") " pod="openstack/kube-state-metrics-0" Dec 03 17:57:34 crc kubenswrapper[4687]: I1203 17:57:34.648309 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.022661 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2lczs"] Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.024112 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.026297 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.026790 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-f7nr6" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.029707 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.032787 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-gtnmq"] Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.038142 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.045386 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lczs"] Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.055917 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gtnmq"] Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.082514 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-combined-ca-bundle\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.082593 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-var-run-ovn\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.082680 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2642fdf0-56b9-4b22-ace6-cde247a8f08e-scripts\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.082709 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-etc-ovs\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.082902 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-var-lib\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.083071 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-ovn-controller-tls-certs\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.083193 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-var-log-ovn\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.083221 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k89zb\" (UniqueName: \"kubernetes.io/projected/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-kube-api-access-k89zb\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.083292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-var-run\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.083349 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-scripts\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.083379 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-var-run\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.083434 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bnvw\" (UniqueName: \"kubernetes.io/projected/2642fdf0-56b9-4b22-ace6-cde247a8f08e-kube-api-access-4bnvw\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.083482 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-var-log\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.184791 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-etc-ovs\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.184870 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" 
(UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-var-lib\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.184905 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-ovn-controller-tls-certs\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.184928 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-var-log-ovn\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.184942 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k89zb\" (UniqueName: \"kubernetes.io/projected/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-kube-api-access-k89zb\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.184964 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-var-run\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.184982 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-scripts\") pod \"ovn-controller-2lczs\" (UID: 
\"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.184998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-var-run\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.185015 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bnvw\" (UniqueName: \"kubernetes.io/projected/2642fdf0-56b9-4b22-ace6-cde247a8f08e-kube-api-access-4bnvw\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.185032 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-var-log\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.185062 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-combined-ca-bundle\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.185093 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-var-run-ovn\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: 
I1203 17:57:38.185261 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2642fdf0-56b9-4b22-ace6-cde247a8f08e-scripts\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.188075 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-etc-ovs\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.188195 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-var-lib\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.187955 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2642fdf0-56b9-4b22-ace6-cde247a8f08e-scripts\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.188344 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-var-log-ovn\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.189027 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-var-log\") 
pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.189258 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2642fdf0-56b9-4b22-ace6-cde247a8f08e-var-run\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.193862 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-combined-ca-bundle\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.198220 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-var-run-ovn\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.198635 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-ovn-controller-tls-certs\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.200159 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-var-run\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 
17:57:38.202594 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-scripts\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.202859 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bnvw\" (UniqueName: \"kubernetes.io/projected/2642fdf0-56b9-4b22-ace6-cde247a8f08e-kube-api-access-4bnvw\") pod \"ovn-controller-ovs-gtnmq\" (UID: \"2642fdf0-56b9-4b22-ace6-cde247a8f08e\") " pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.221491 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k89zb\" (UniqueName: \"kubernetes.io/projected/3037eba1-1fab-4d56-a3f0-1cecb58b3f7a-kube-api-access-k89zb\") pod \"ovn-controller-2lczs\" (UID: \"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a\") " pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.348955 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lczs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.363706 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.575376 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.576753 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.579561 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.580205 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.580320 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.580377 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.580538 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-2czzb" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.589829 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"04732311-c8eb-4351-a564-78ce8c8e1811","Type":"ContainerStarted","Data":"64790b3f6b486035374bc70e2520af3ad5e4303adae440ce172230a34521e04f"} Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.598246 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.690670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff56e13-4338-42bd-a378-b0d72daa296e-config\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.690728 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aff56e13-4338-42bd-a378-b0d72daa296e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.690848 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p64lj\" (UniqueName: \"kubernetes.io/projected/aff56e13-4338-42bd-a378-b0d72daa296e-kube-api-access-p64lj\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.690945 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aff56e13-4338-42bd-a378-b0d72daa296e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.691022 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff56e13-4338-42bd-a378-b0d72daa296e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.691109 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff56e13-4338-42bd-a378-b0d72daa296e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.691166 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/aff56e13-4338-42bd-a378-b0d72daa296e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.691226 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.792605 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff56e13-4338-42bd-a378-b0d72daa296e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.792860 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff56e13-4338-42bd-a378-b0d72daa296e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.792896 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.792941 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff56e13-4338-42bd-a378-b0d72daa296e-config\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc 
kubenswrapper[4687]: I1203 17:57:38.792960 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff56e13-4338-42bd-a378-b0d72daa296e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.792995 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p64lj\" (UniqueName: \"kubernetes.io/projected/aff56e13-4338-42bd-a378-b0d72daa296e-kube-api-access-p64lj\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.793033 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aff56e13-4338-42bd-a378-b0d72daa296e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.793083 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff56e13-4338-42bd-a378-b0d72daa296e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.793478 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/aff56e13-4338-42bd-a378-b0d72daa296e-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.793898 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aff56e13-4338-42bd-a378-b0d72daa296e-config\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.793993 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aff56e13-4338-42bd-a378-b0d72daa296e-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.795082 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.797636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff56e13-4338-42bd-a378-b0d72daa296e-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.798039 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff56e13-4338-42bd-a378-b0d72daa296e-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.798329 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aff56e13-4338-42bd-a378-b0d72daa296e-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " 
pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.812826 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p64lj\" (UniqueName: \"kubernetes.io/projected/aff56e13-4338-42bd-a378-b0d72daa296e-kube-api-access-p64lj\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.819441 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"aff56e13-4338-42bd-a378-b0d72daa296e\") " pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:38 crc kubenswrapper[4687]: I1203 17:57:38.902928 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.004742 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.006664 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.011492 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.011649 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-srvbm" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.011761 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.011842 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.012741 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.139276 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e41fb58-0d75-4204-85eb-7c5526d637e6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.139322 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41fb58-0d75-4204-85eb-7c5526d637e6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.139367 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e41fb58-0d75-4204-85eb-7c5526d637e6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" 
(UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.139415 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp66k\" (UniqueName: \"kubernetes.io/projected/2e41fb58-0d75-4204-85eb-7c5526d637e6-kube-api-access-pp66k\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.139439 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e41fb58-0d75-4204-85eb-7c5526d637e6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.139504 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e41fb58-0d75-4204-85eb-7c5526d637e6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.139575 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.139591 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e41fb58-0d75-4204-85eb-7c5526d637e6-config\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc 
kubenswrapper[4687]: I1203 17:57:42.241742 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.241813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e41fb58-0d75-4204-85eb-7c5526d637e6-config\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.241908 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e41fb58-0d75-4204-85eb-7c5526d637e6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.241953 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41fb58-0d75-4204-85eb-7c5526d637e6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.242105 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.242539 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/2e41fb58-0d75-4204-85eb-7c5526d637e6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.242079 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e41fb58-0d75-4204-85eb-7c5526d637e6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.243543 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp66k\" (UniqueName: \"kubernetes.io/projected/2e41fb58-0d75-4204-85eb-7c5526d637e6-kube-api-access-pp66k\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.243621 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e41fb58-0d75-4204-85eb-7c5526d637e6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.243693 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e41fb58-0d75-4204-85eb-7c5526d637e6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.245527 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e41fb58-0d75-4204-85eb-7c5526d637e6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " 
pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.247182 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e41fb58-0d75-4204-85eb-7c5526d637e6-config\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.247634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e41fb58-0d75-4204-85eb-7c5526d637e6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.247946 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e41fb58-0d75-4204-85eb-7c5526d637e6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.256402 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e41fb58-0d75-4204-85eb-7c5526d637e6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.263193 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.270528 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp66k\" (UniqueName: 
\"kubernetes.io/projected/2e41fb58-0d75-4204-85eb-7c5526d637e6-kube-api-access-pp66k\") pod \"ovsdbserver-sb-0\" (UID: \"2e41fb58-0d75-4204-85eb-7c5526d637e6\") " pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:42 crc kubenswrapper[4687]: I1203 17:57:42.323985 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 03 17:57:44 crc kubenswrapper[4687]: I1203 17:57:44.111672 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:57:44 crc kubenswrapper[4687]: I1203 17:57:44.112207 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:57:44 crc kubenswrapper[4687]: I1203 17:57:44.835589 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 17:57:45 crc kubenswrapper[4687]: E1203 17:57:45.424690 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 03 17:57:45 crc kubenswrapper[4687]: E1203 17:57:45.424842 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo 
'[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zb5ts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil
,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:57:45 crc kubenswrapper[4687]: E1203 17:57:45.426011 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" Dec 03 17:57:45 crc kubenswrapper[4687]: E1203 17:57:45.429262 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 03 17:57:45 crc kubenswrapper[4687]: E1203 17:57:45.429539 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 
-3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkrzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(63e536c1-72f7-438c-b34c-b8750dd1796b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 03 17:57:45 crc kubenswrapper[4687]: E1203 17:57:45.430760 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" Dec 03 17:57:45 crc kubenswrapper[4687]: E1203 17:57:45.641347 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" Dec 03 17:57:45 crc kubenswrapper[4687]: E1203 17:57:45.641543 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" Dec 03 17:57:50 crc kubenswrapper[4687]: I1203 17:57:50.680972 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c337091b-5995-4214-9161-18188eb806aa","Type":"ContainerStarted","Data":"cdfab4f6fe124ddd0854f023b10cf3ce89dc10a8c88a9dd87321220a40b1d24a"} Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.282895 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.283399 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dl99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-lp4x8_openstack(f2d32d17-7c63-427d-ba0b-d45aceaea477): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.283175 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.283629 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xlsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8xqg2_openstack(84359401-be55-41fd-924a-94eea80f2273): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.285428 4687 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" podUID="f2d32d17-7c63-427d-ba0b-d45aceaea477" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.286902 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" podUID="84359401-be55-41fd-924a-94eea80f2273" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.287678 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.287801 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jx68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bkzn2_openstack(02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.288923 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" podUID="02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.336111 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.336260 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlmtk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-574kj_openstack(4954cd1d-111f-40c5-b681-739da73ea439): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.340441 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" podUID="4954cd1d-111f-40c5-b681-739da73ea439" Dec 03 17:57:51 crc kubenswrapper[4687]: I1203 17:57:51.708733 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lczs"] Dec 03 17:57:51 crc kubenswrapper[4687]: I1203 17:57:51.718514 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"04732311-c8eb-4351-a564-78ce8c8e1811","Type":"ContainerStarted","Data":"3bd29a0c98da5a06e1674ac631ebec5e682abdcda1e56f05f3534063d498ced2"} Dec 03 17:57:51 crc kubenswrapper[4687]: E1203 17:57:51.722563 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" podUID="f2d32d17-7c63-427d-ba0b-d45aceaea477" Dec 03 17:57:51 crc 
kubenswrapper[4687]: E1203 17:57:51.722955 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" podUID="4954cd1d-111f-40c5-b681-739da73ea439" Dec 03 17:57:51 crc kubenswrapper[4687]: I1203 17:57:51.813966 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 03 17:57:51 crc kubenswrapper[4687]: I1203 17:57:51.832470 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 03 17:57:51 crc kubenswrapper[4687]: I1203 17:57:51.968108 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.072574 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.169817 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-gtnmq"] Dec 03 17:57:52 crc kubenswrapper[4687]: W1203 17:57:52.203209 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2642fdf0_56b9_4b22_ace6_cde247a8f08e.slice/crio-42734d962db64b2eab6bf1a84e15f2dfca46105814bb3203c26475aeb9dacb78 WatchSource:0}: Error finding container 42734d962db64b2eab6bf1a84e15f2dfca46105814bb3203c26475aeb9dacb78: Status 404 returned error can't find the container with id 42734d962db64b2eab6bf1a84e15f2dfca46105814bb3203c26475aeb9dacb78 Dec 03 17:57:52 crc kubenswrapper[4687]: W1203 17:57:52.262412 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6b36375_980f_4c1d_8ddb_61d9565db565.slice/crio-8701f98e4795052196f0720268fe8dd0ac21c4c7b47500a7e0a1ee93f209b9b6 
WatchSource:0}: Error finding container 8701f98e4795052196f0720268fe8dd0ac21c4c7b47500a7e0a1ee93f209b9b6: Status 404 returned error can't find the container with id 8701f98e4795052196f0720268fe8dd0ac21c4c7b47500a7e0a1ee93f209b9b6 Dec 03 17:57:52 crc kubenswrapper[4687]: W1203 17:57:52.266565 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaff56e13_4338_42bd_a378_b0d72daa296e.slice/crio-1546f57a71f648e97fa224d95d465c4dc28532a9c6307309311198d2bbe16c92 WatchSource:0}: Error finding container 1546f57a71f648e97fa224d95d465c4dc28532a9c6307309311198d2bbe16c92: Status 404 returned error can't find the container with id 1546f57a71f648e97fa224d95d465c4dc28532a9c6307309311198d2bbe16c92 Dec 03 17:57:52 crc kubenswrapper[4687]: W1203 17:57:52.269421 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e41fb58_0d75_4204_85eb_7c5526d637e6.slice/crio-08c87a5ac69e6864abb54e223f8d8d4689cbcbd1bad1dcb95c91be9fdb7a95cb WatchSource:0}: Error finding container 08c87a5ac69e6864abb54e223f8d8d4689cbcbd1bad1dcb95c91be9fdb7a95cb: Status 404 returned error can't find the container with id 08c87a5ac69e6864abb54e223f8d8d4689cbcbd1bad1dcb95c91be9fdb7a95cb Dec 03 17:57:52 crc kubenswrapper[4687]: W1203 17:57:52.271111 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb00142cd_f59e_49d3_9d26_e1344598a59a.slice/crio-ef62c256ec8199b49820d6ac8a54a7a3415f215679e184d1d9ce0ddb844c94df WatchSource:0}: Error finding container ef62c256ec8199b49820d6ac8a54a7a3415f215679e184d1d9ce0ddb844c94df: Status 404 returned error can't find the container with id ef62c256ec8199b49820d6ac8a54a7a3415f215679e184d1d9ce0ddb844c94df Dec 03 17:57:52 crc kubenswrapper[4687]: W1203 17:57:52.273199 4687 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3037eba1_1fab_4d56_a3f0_1cecb58b3f7a.slice/crio-9973c07a02812d55f3b7f9453494525e91e6a2fcda89f5f11be7eb7ed6ac581b WatchSource:0}: Error finding container 9973c07a02812d55f3b7f9453494525e91e6a2fcda89f5f11be7eb7ed6ac581b: Status 404 returned error can't find the container with id 9973c07a02812d55f3b7f9453494525e91e6a2fcda89f5f11be7eb7ed6ac581b Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.335861 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.346282 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.533368 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-config\") pod \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\" (UID: \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\") " Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.533875 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jx68\" (UniqueName: \"kubernetes.io/projected/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-kube-api-access-2jx68\") pod \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\" (UID: \"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317\") " Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.533952 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xlsk\" (UniqueName: \"kubernetes.io/projected/84359401-be55-41fd-924a-94eea80f2273-kube-api-access-6xlsk\") pod \"84359401-be55-41fd-924a-94eea80f2273\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.533990 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-config\") pod \"84359401-be55-41fd-924a-94eea80f2273\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.534023 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-config" (OuterVolumeSpecName: "config") pod "02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317" (UID: "02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.534048 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-dns-svc\") pod \"84359401-be55-41fd-924a-94eea80f2273\" (UID: \"84359401-be55-41fd-924a-94eea80f2273\") " Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.534541 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.535952 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84359401-be55-41fd-924a-94eea80f2273" (UID: "84359401-be55-41fd-924a-94eea80f2273"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.535958 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-config" (OuterVolumeSpecName: "config") pod "84359401-be55-41fd-924a-94eea80f2273" (UID: "84359401-be55-41fd-924a-94eea80f2273"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.536864 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84359401-be55-41fd-924a-94eea80f2273-kube-api-access-6xlsk" (OuterVolumeSpecName: "kube-api-access-6xlsk") pod "84359401-be55-41fd-924a-94eea80f2273" (UID: "84359401-be55-41fd-924a-94eea80f2273"). InnerVolumeSpecName "kube-api-access-6xlsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.542019 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-kube-api-access-2jx68" (OuterVolumeSpecName: "kube-api-access-2jx68") pod "02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317" (UID: "02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317"). InnerVolumeSpecName "kube-api-access-2jx68". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.635887 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jx68\" (UniqueName: \"kubernetes.io/projected/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317-kube-api-access-2jx68\") on node \"crc\" DevicePath \"\"" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.635921 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xlsk\" (UniqueName: \"kubernetes.io/projected/84359401-be55-41fd-924a-94eea80f2273-kube-api-access-6xlsk\") on node \"crc\" DevicePath \"\"" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.635935 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.635947 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/84359401-be55-41fd-924a-94eea80f2273-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.727175 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" event={"ID":"02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317","Type":"ContainerDied","Data":"7497932360d0083b87f511455ad47bb3b79e1114fb5a3e79f4346ac771d8c17a"} Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.727232 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bkzn2" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.728932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2e41fb58-0d75-4204-85eb-7c5526d637e6","Type":"ContainerStarted","Data":"08c87a5ac69e6864abb54e223f8d8d4689cbcbd1bad1dcb95c91be9fdb7a95cb"} Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.730693 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b6b36375-980f-4c1d-8ddb-61d9565db565","Type":"ContainerStarted","Data":"8701f98e4795052196f0720268fe8dd0ac21c4c7b47500a7e0a1ee93f209b9b6"} Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.732932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b00142cd-f59e-49d3-9d26-e1344598a59a","Type":"ContainerStarted","Data":"ef62c256ec8199b49820d6ac8a54a7a3415f215679e184d1d9ce0ddb844c94df"} Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.734783 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lczs" event={"ID":"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a","Type":"ContainerStarted","Data":"9973c07a02812d55f3b7f9453494525e91e6a2fcda89f5f11be7eb7ed6ac581b"} Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.736050 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" 
event={"ID":"84359401-be55-41fd-924a-94eea80f2273","Type":"ContainerDied","Data":"3d464d7068477bb6fce105be379232c246f81c9e271c4613813a82bb93e886fa"} Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.736107 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8xqg2" Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.741677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gtnmq" event={"ID":"2642fdf0-56b9-4b22-ace6-cde247a8f08e","Type":"ContainerStarted","Data":"42734d962db64b2eab6bf1a84e15f2dfca46105814bb3203c26475aeb9dacb78"} Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.743053 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aff56e13-4338-42bd-a378-b0d72daa296e","Type":"ContainerStarted","Data":"1546f57a71f648e97fa224d95d465c4dc28532a9c6307309311198d2bbe16c92"} Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.800272 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8xqg2"] Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.815796 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8xqg2"] Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.831574 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bkzn2"] Dec 03 17:57:52 crc kubenswrapper[4687]: I1203 17:57:52.847981 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bkzn2"] Dec 03 17:57:53 crc kubenswrapper[4687]: I1203 17:57:53.420821 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317" path="/var/lib/kubelet/pods/02c6b7d9-0d6d-408a-91f4-6f2b0fbaf317/volumes" Dec 03 17:57:53 crc kubenswrapper[4687]: I1203 17:57:53.421468 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="84359401-be55-41fd-924a-94eea80f2273" path="/var/lib/kubelet/pods/84359401-be55-41fd-924a-94eea80f2273/volumes" Dec 03 17:57:53 crc kubenswrapper[4687]: I1203 17:57:53.754652 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c337091b-5995-4214-9161-18188eb806aa","Type":"ContainerStarted","Data":"d839dd50b9f29bcfc89ca4af10103c012516b56e1588681a56a8d2b9c30150a3"} Dec 03 17:57:53 crc kubenswrapper[4687]: I1203 17:57:53.754779 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 03 17:57:53 crc kubenswrapper[4687]: I1203 17:57:53.756468 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b00142cd-f59e-49d3-9d26-e1344598a59a","Type":"ContainerStarted","Data":"c63aa05d956b0c7eb7f7d5c1c689f49122679f6957a03660cc6429b9c0429344"} Dec 03 17:57:53 crc kubenswrapper[4687]: I1203 17:57:53.773797 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.160119719 podStartE2EDuration="19.77378152s" podCreationTimestamp="2025-12-03 17:57:34 +0000 UTC" firstStartedPulling="2025-12-03 17:57:50.559825094 +0000 UTC m=+1103.450520567" lastFinishedPulling="2025-12-03 17:57:53.173486895 +0000 UTC m=+1106.064182368" observedRunningTime="2025-12-03 17:57:53.771164689 +0000 UTC m=+1106.661860142" watchObservedRunningTime="2025-12-03 17:57:53.77378152 +0000 UTC m=+1106.664476953" Dec 03 17:57:55 crc kubenswrapper[4687]: I1203 17:57:55.770176 4687 generic.go:334] "Generic (PLEG): container finished" podID="04732311-c8eb-4351-a564-78ce8c8e1811" containerID="3bd29a0c98da5a06e1674ac631ebec5e682abdcda1e56f05f3534063d498ced2" exitCode=0 Dec 03 17:57:55 crc kubenswrapper[4687]: I1203 17:57:55.770259 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"04732311-c8eb-4351-a564-78ce8c8e1811","Type":"ContainerDied","Data":"3bd29a0c98da5a06e1674ac631ebec5e682abdcda1e56f05f3534063d498ced2"} Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.778271 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gtnmq" event={"ID":"2642fdf0-56b9-4b22-ace6-cde247a8f08e","Type":"ContainerStarted","Data":"b201969f094c37819a38f5dbec4da85c720c98d75681e0d604799ff3fd111779"} Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.780085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aff56e13-4338-42bd-a378-b0d72daa296e","Type":"ContainerStarted","Data":"cbed8238ea4ca26093557b6ef2d61af1f46dd0d50ac7626723aed9d070c03372"} Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.781737 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2e41fb58-0d75-4204-85eb-7c5526d637e6","Type":"ContainerStarted","Data":"7385f7215fe2e42f2f95d820020fcae31df7b2b37176902b245c89d65c54bd01"} Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.783375 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b6b36375-980f-4c1d-8ddb-61d9565db565","Type":"ContainerStarted","Data":"7e66d0ae42241ae1d07bab604d335d68032bb3dca8830b2a74c66c104476c123"} Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.783494 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.784979 4687 generic.go:334] "Generic (PLEG): container finished" podID="b00142cd-f59e-49d3-9d26-e1344598a59a" containerID="c63aa05d956b0c7eb7f7d5c1c689f49122679f6957a03660cc6429b9c0429344" exitCode=0 Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.785036 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"b00142cd-f59e-49d3-9d26-e1344598a59a","Type":"ContainerDied","Data":"c63aa05d956b0c7eb7f7d5c1c689f49122679f6957a03660cc6429b9c0429344"} Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.786236 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lczs" event={"ID":"3037eba1-1fab-4d56-a3f0-1cecb58b3f7a","Type":"ContainerStarted","Data":"ec7cce941f052d124b92c9d07d9100ad38a79e1cb5ccc0cf0ee9caad044eda40"} Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.786664 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-2lczs" Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.788521 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"04732311-c8eb-4351-a564-78ce8c8e1811","Type":"ContainerStarted","Data":"a4a19504e2e44f5976b49013e1905290264042bfbc144d0ad25d53989fb0a735"} Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.826280 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2lczs" podStartSLOduration=15.898212241 podStartE2EDuration="19.826262647s" podCreationTimestamp="2025-12-03 17:57:37 +0000 UTC" firstStartedPulling="2025-12-03 17:57:52.27500744 +0000 UTC m=+1105.165702873" lastFinishedPulling="2025-12-03 17:57:56.203057846 +0000 UTC m=+1109.093753279" observedRunningTime="2025-12-03 17:57:56.819846014 +0000 UTC m=+1109.710541447" watchObservedRunningTime="2025-12-03 17:57:56.826262647 +0000 UTC m=+1109.716958080" Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.876378 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=14.500762916 podStartE2EDuration="27.876363115s" podCreationTimestamp="2025-12-03 17:57:29 +0000 UTC" firstStartedPulling="2025-12-03 17:57:37.911216371 +0000 UTC m=+1090.801911804" lastFinishedPulling="2025-12-03 17:57:51.28681655 +0000 UTC 
m=+1104.177512003" observedRunningTime="2025-12-03 17:57:56.871471804 +0000 UTC m=+1109.762167257" watchObservedRunningTime="2025-12-03 17:57:56.876363115 +0000 UTC m=+1109.767058548" Dec 03 17:57:56 crc kubenswrapper[4687]: I1203 17:57:56.892931 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.998177684 podStartE2EDuration="24.892909921s" podCreationTimestamp="2025-12-03 17:57:32 +0000 UTC" firstStartedPulling="2025-12-03 17:57:52.265767472 +0000 UTC m=+1105.156462905" lastFinishedPulling="2025-12-03 17:57:56.160499709 +0000 UTC m=+1109.051195142" observedRunningTime="2025-12-03 17:57:56.886261453 +0000 UTC m=+1109.776956886" watchObservedRunningTime="2025-12-03 17:57:56.892909921 +0000 UTC m=+1109.783605354" Dec 03 17:57:57 crc kubenswrapper[4687]: I1203 17:57:57.802904 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b00142cd-f59e-49d3-9d26-e1344598a59a","Type":"ContainerStarted","Data":"8ef76450659a0928856479bf7acbf624d5e06bc02dc793b985448af9ecf67b45"} Dec 03 17:57:57 crc kubenswrapper[4687]: I1203 17:57:57.808566 4687 generic.go:334] "Generic (PLEG): container finished" podID="2642fdf0-56b9-4b22-ace6-cde247a8f08e" containerID="b201969f094c37819a38f5dbec4da85c720c98d75681e0d604799ff3fd111779" exitCode=0 Dec 03 17:57:57 crc kubenswrapper[4687]: I1203 17:57:57.808651 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gtnmq" event={"ID":"2642fdf0-56b9-4b22-ace6-cde247a8f08e","Type":"ContainerDied","Data":"b201969f094c37819a38f5dbec4da85c720c98d75681e0d604799ff3fd111779"} Dec 03 17:57:57 crc kubenswrapper[4687]: I1203 17:57:57.830427 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.830406207 podStartE2EDuration="27.830406207s" podCreationTimestamp="2025-12-03 17:57:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:57:57.822547095 +0000 UTC m=+1110.713242548" watchObservedRunningTime="2025-12-03 17:57:57.830406207 +0000 UTC m=+1110.721101640" Dec 03 17:57:58 crc kubenswrapper[4687]: I1203 17:57:58.816607 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a","Type":"ContainerStarted","Data":"89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16"} Dec 03 17:57:59 crc kubenswrapper[4687]: I1203 17:57:59.828638 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gtnmq" event={"ID":"2642fdf0-56b9-4b22-ace6-cde247a8f08e","Type":"ContainerStarted","Data":"cd16109ff342c656c2c927e6cb2ae82329c74068583ffa084928ec4930052f9f"} Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.821346 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.821788 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.836040 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2e41fb58-0d75-4204-85eb-7c5526d637e6","Type":"ContainerStarted","Data":"1a6ef861aa15d8c9a5c73e04f55e671981a61b6914eb466c759966dd26f77093"} Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.841009 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-gtnmq" event={"ID":"2642fdf0-56b9-4b22-ace6-cde247a8f08e","Type":"ContainerStarted","Data":"17c4e682a475c8d311a348e59d14638b3a7d453f701fe0268d557882a37b5cbf"} Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.841416 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.841466 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.843231 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"aff56e13-4338-42bd-a378-b0d72daa296e","Type":"ContainerStarted","Data":"c63c348014d7d442bad73db71ad09180a9fa4ee17ec188d2cb4e090a7f6a2eac"} Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.858831 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.80610501 podStartE2EDuration="20.858816476s" podCreationTimestamp="2025-12-03 17:57:40 +0000 UTC" firstStartedPulling="2025-12-03 17:57:52.272303797 +0000 UTC m=+1105.162999230" lastFinishedPulling="2025-12-03 17:57:59.325015263 +0000 UTC m=+1112.215710696" observedRunningTime="2025-12-03 17:58:00.856832752 +0000 UTC m=+1113.747528185" watchObservedRunningTime="2025-12-03 17:58:00.858816476 +0000 UTC m=+1113.749511899" Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.880806 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=16.832264544 podStartE2EDuration="23.880786347s" podCreationTimestamp="2025-12-03 17:57:37 +0000 UTC" firstStartedPulling="2025-12-03 17:57:52.26867477 +0000 UTC m=+1105.159370203" lastFinishedPulling="2025-12-03 17:57:59.317196563 +0000 UTC m=+1112.207892006" observedRunningTime="2025-12-03 17:58:00.874870198 +0000 UTC m=+1113.765565641" watchObservedRunningTime="2025-12-03 17:58:00.880786347 +0000 UTC m=+1113.771481780" Dec 03 17:58:00 crc kubenswrapper[4687]: I1203 17:58:00.898004 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-gtnmq" podStartSLOduration=18.93145903 podStartE2EDuration="22.897987091s" 
podCreationTimestamp="2025-12-03 17:57:38 +0000 UTC" firstStartedPulling="2025-12-03 17:57:52.20520254 +0000 UTC m=+1105.095897973" lastFinishedPulling="2025-12-03 17:57:56.171730601 +0000 UTC m=+1109.062426034" observedRunningTime="2025-12-03 17:58:00.893810618 +0000 UTC m=+1113.784506061" watchObservedRunningTime="2025-12-03 17:58:00.897987091 +0000 UTC m=+1113.788682524" Dec 03 17:58:01 crc kubenswrapper[4687]: I1203 17:58:01.853056 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63e536c1-72f7-438c-b34c-b8750dd1796b","Type":"ContainerStarted","Data":"51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d"} Dec 03 17:58:02 crc kubenswrapper[4687]: I1203 17:58:02.324062 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 03 17:58:02 crc kubenswrapper[4687]: I1203 17:58:02.529684 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 03 17:58:02 crc kubenswrapper[4687]: I1203 17:58:02.529747 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 03 17:58:02 crc kubenswrapper[4687]: I1203 17:58:02.604551 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 03 17:58:02 crc kubenswrapper[4687]: I1203 17:58:02.621089 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 03 17:58:02 crc kubenswrapper[4687]: I1203 17:58:02.904183 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 03 17:58:02 crc kubenswrapper[4687]: I1203 17:58:02.959560 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 03 17:58:02 crc kubenswrapper[4687]: I1203 17:58:02.973161 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.324184 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.385772 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.877341 4687 generic.go:334] "Generic (PLEG): container finished" podID="f2d32d17-7c63-427d-ba0b-d45aceaea477" containerID="aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf" exitCode=0 Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.877407 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" event={"ID":"f2d32d17-7c63-427d-ba0b-d45aceaea477","Type":"ContainerDied","Data":"aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf"} Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.879469 4687 generic.go:334] "Generic (PLEG): container finished" podID="4954cd1d-111f-40c5-b681-739da73ea439" containerID="0fbc663e92ecc08431b3847ef10aef70166f73559f3cbaec8a98ff2088134971" exitCode=0 Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.879842 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" event={"ID":"4954cd1d-111f-40c5-b681-739da73ea439","Type":"ContainerDied","Data":"0fbc663e92ecc08431b3847ef10aef70166f73559f3cbaec8a98ff2088134971"} Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.880359 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.929884 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.932256 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 03 17:58:03 crc kubenswrapper[4687]: I1203 17:58:03.956738 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.069199 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.205335 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-574kj"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.226543 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-c4zz6"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.227722 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.229758 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.240656 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-c4zz6"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.302409 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4sqs2"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.303559 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.305863 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.319987 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4sqs2"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.356928 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53de0da8-3b25-403a-9956-79082a62780b-config\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.356988 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.357017 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/53de0da8-3b25-403a-9956-79082a62780b-ovs-rundir\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.357203 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-config\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 
17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.357341 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbf7\" (UniqueName: \"kubernetes.io/projected/53de0da8-3b25-403a-9956-79082a62780b-kube-api-access-jxbf7\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.357472 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bvfh\" (UniqueName: \"kubernetes.io/projected/13a2fc61-ea0f-4044-93a3-0d18611d756c-kube-api-access-6bvfh\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.357567 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/53de0da8-3b25-403a-9956-79082a62780b-ovn-rundir\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.357600 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.357684 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53de0da8-3b25-403a-9956-79082a62780b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4sqs2\" (UID: 
\"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.357734 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53de0da8-3b25-403a-9956-79082a62780b-combined-ca-bundle\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.394033 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lp4x8"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.433060 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jrhlf"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.435399 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.438382 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.458571 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bvfh\" (UniqueName: \"kubernetes.io/projected/13a2fc61-ea0f-4044-93a3-0d18611d756c-kube-api-access-6bvfh\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.458868 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/53de0da8-3b25-403a-9956-79082a62780b-ovn-rundir\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: 
I1203 17:58:04.458959 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.459059 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53de0da8-3b25-403a-9956-79082a62780b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.459153 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53de0da8-3b25-403a-9956-79082a62780b-combined-ca-bundle\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.459243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53de0da8-3b25-403a-9956-79082a62780b-config\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.459326 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.459403 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/53de0da8-3b25-403a-9956-79082a62780b-ovs-rundir\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.459471 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-config\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.459547 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxbf7\" (UniqueName: \"kubernetes.io/projected/53de0da8-3b25-403a-9956-79082a62780b-kube-api-access-jxbf7\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.459196 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/53de0da8-3b25-403a-9956-79082a62780b-ovn-rundir\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.459924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/53de0da8-3b25-403a-9956-79082a62780b-ovs-rundir\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.460040 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.460687 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53de0da8-3b25-403a-9956-79082a62780b-config\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.460848 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-config\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.464615 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.466724 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/53de0da8-3b25-403a-9956-79082a62780b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.469893 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53de0da8-3b25-403a-9956-79082a62780b-combined-ca-bundle\") pod \"ovn-controller-metrics-4sqs2\" 
(UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.472798 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jrhlf"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.506959 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bvfh\" (UniqueName: \"kubernetes.io/projected/13a2fc61-ea0f-4044-93a3-0d18611d756c-kube-api-access-6bvfh\") pod \"dnsmasq-dns-7fd796d7df-c4zz6\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.510595 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxbf7\" (UniqueName: \"kubernetes.io/projected/53de0da8-3b25-403a-9956-79082a62780b-kube-api-access-jxbf7\") pod \"ovn-controller-metrics-4sqs2\" (UID: \"53de0da8-3b25-403a-9956-79082a62780b\") " pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.563392 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.566157 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.567401 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.573896 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.581070 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.581378 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.581418 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-config\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.581436 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.581503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fsg6n\" (UniqueName: \"kubernetes.io/projected/d1765443-5034-4dc1-a4f6-c23f73da8dc3-kube-api-access-fsg6n\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.587417 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.587844 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.588368 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-p25m2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.618545 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4sqs2" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.676540 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.684098 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691516 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691561 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-config\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691584 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691609 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691633 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691659 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsg6n\" (UniqueName: \"kubernetes.io/projected/d1765443-5034-4dc1-a4f6-c23f73da8dc3-kube-api-access-fsg6n\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691698 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-scripts\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: 
I1203 17:58:04.691718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691746 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkvmq\" (UniqueName: \"kubernetes.io/projected/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-kube-api-access-hkvmq\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691768 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691789 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-config\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.691832 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.692587 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-config\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.695232 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.695866 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.702436 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.736103 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsg6n\" (UniqueName: \"kubernetes.io/projected/d1765443-5034-4dc1-a4f6-c23f73da8dc3-kube-api-access-fsg6n\") pod \"dnsmasq-dns-86db49b7ff-jrhlf\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.750019 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.793827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-scripts\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.794088 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.794131 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkvmq\" (UniqueName: \"kubernetes.io/projected/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-kube-api-access-hkvmq\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.794158 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-config\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.794205 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.794239 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.794260 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.797238 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-scripts\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.797759 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-config\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.800806 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.807232 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 
17:58:04.809489 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-c4zz6"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.812469 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.815106 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.836414 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-t5dt9"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.838014 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.843668 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkvmq\" (UniqueName: \"kubernetes.io/projected/fe36f76e-b5b2-4dfe-923b-0516ea0af76f-kube-api-access-hkvmq\") pod \"ovn-northd-0\" (UID: \"fe36f76e-b5b2-4dfe-923b-0516ea0af76f\") " pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.859955 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t5dt9"] Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.905909 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" event={"ID":"4954cd1d-111f-40c5-b681-739da73ea439","Type":"ContainerStarted","Data":"edd723eda3fae66589b6d7f02485709a92c2fee551845f6942abec7cabff6dbb"} Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.906005 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.905999 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" podUID="4954cd1d-111f-40c5-b681-739da73ea439" containerName="dnsmasq-dns" containerID="cri-o://edd723eda3fae66589b6d7f02485709a92c2fee551845f6942abec7cabff6dbb" gracePeriod=10 Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.916603 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" podUID="f2d32d17-7c63-427d-ba0b-d45aceaea477" containerName="dnsmasq-dns" containerID="cri-o://41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a" gracePeriod=10 Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.917369 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" 
event={"ID":"f2d32d17-7c63-427d-ba0b-d45aceaea477","Type":"ContainerStarted","Data":"41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a"} Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.920210 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.923498 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.956851 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" podStartSLOduration=3.154533058 podStartE2EDuration="37.956830798s" podCreationTimestamp="2025-12-03 17:57:27 +0000 UTC" firstStartedPulling="2025-12-03 17:57:28.291332715 +0000 UTC m=+1081.182028148" lastFinishedPulling="2025-12-03 17:58:03.093630455 +0000 UTC m=+1115.984325888" observedRunningTime="2025-12-03 17:58:04.949646244 +0000 UTC m=+1117.840341687" watchObservedRunningTime="2025-12-03 17:58:04.956830798 +0000 UTC m=+1117.847526231" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.959703 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" podStartSLOduration=3.688222753 podStartE2EDuration="37.959686014s" podCreationTimestamp="2025-12-03 17:57:27 +0000 UTC" firstStartedPulling="2025-12-03 17:57:28.823567362 +0000 UTC m=+1081.714262795" lastFinishedPulling="2025-12-03 17:58:03.095030623 +0000 UTC m=+1115.985726056" observedRunningTime="2025-12-03 17:58:04.927905748 +0000 UTC m=+1117.818601181" watchObservedRunningTime="2025-12-03 17:58:04.959686014 +0000 UTC m=+1117.850381447" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.998521 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-config\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.998646 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.998737 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5wgs\" (UniqueName: \"kubernetes.io/projected/a3522527-5e9c-4148-89ea-890feca4df8b-kube-api-access-t5wgs\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.998788 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:04 crc kubenswrapper[4687]: I1203 17:58:04.998811 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-dns-svc\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.119757 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-config\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.120086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.120146 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5wgs\" (UniqueName: \"kubernetes.io/projected/a3522527-5e9c-4148-89ea-890feca4df8b-kube-api-access-t5wgs\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.120196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.120219 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-dns-svc\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.121168 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-config\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.121259 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-dns-svc\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.121822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.128556 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.136896 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5wgs\" (UniqueName: \"kubernetes.io/projected/a3522527-5e9c-4148-89ea-890feca4df8b-kube-api-access-t5wgs\") pod \"dnsmasq-dns-698758b865-t5dt9\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.186920 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.190101 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-c4zz6"] Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.281373 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4sqs2"] Dec 03 17:58:05 crc kubenswrapper[4687]: W1203 17:58:05.304295 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53de0da8_3b25_403a_9956_79082a62780b.slice/crio-5357468835a3541619052296721de0d63ccb68ff8aae118d195979a0968fa574 WatchSource:0}: Error finding container 5357468835a3541619052296721de0d63ccb68ff8aae118d195979a0968fa574: Status 404 returned error can't find the container with id 5357468835a3541619052296721de0d63ccb68ff8aae118d195979a0968fa574 Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.390599 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jrhlf"] Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.457625 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t5dt9"] Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.533684 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 03 17:58:05 crc kubenswrapper[4687]: W1203 17:58:05.569674 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe36f76e_b5b2_4dfe_923b_0516ea0af76f.slice/crio-9d77871ec89b3ee8ea330bcbc1af94718dfa4f3a9b22fd3a90a4293492fe3403 WatchSource:0}: Error finding container 9d77871ec89b3ee8ea330bcbc1af94718dfa4f3a9b22fd3a90a4293492fe3403: Status 404 returned error can't find the container with id 9d77871ec89b3ee8ea330bcbc1af94718dfa4f3a9b22fd3a90a4293492fe3403 Dec 03 17:58:05 crc kubenswrapper[4687]: 
I1203 17:58:05.696434 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.834012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-config\") pod \"f2d32d17-7c63-427d-ba0b-d45aceaea477\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.834109 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-dns-svc\") pod \"f2d32d17-7c63-427d-ba0b-d45aceaea477\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.834200 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dl99\" (UniqueName: \"kubernetes.io/projected/f2d32d17-7c63-427d-ba0b-d45aceaea477-kube-api-access-8dl99\") pod \"f2d32d17-7c63-427d-ba0b-d45aceaea477\" (UID: \"f2d32d17-7c63-427d-ba0b-d45aceaea477\") " Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.838571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2d32d17-7c63-427d-ba0b-d45aceaea477-kube-api-access-8dl99" (OuterVolumeSpecName: "kube-api-access-8dl99") pod "f2d32d17-7c63-427d-ba0b-d45aceaea477" (UID: "f2d32d17-7c63-427d-ba0b-d45aceaea477"). InnerVolumeSpecName "kube-api-access-8dl99". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.872514 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2d32d17-7c63-427d-ba0b-d45aceaea477" (UID: "f2d32d17-7c63-427d-ba0b-d45aceaea477"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.878028 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-config" (OuterVolumeSpecName: "config") pod "f2d32d17-7c63-427d-ba0b-d45aceaea477" (UID: "f2d32d17-7c63-427d-ba0b-d45aceaea477"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.925270 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4sqs2" event={"ID":"53de0da8-3b25-403a-9956-79082a62780b","Type":"ContainerStarted","Data":"5357468835a3541619052296721de0d63ccb68ff8aae118d195979a0968fa574"} Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.931341 4687 generic.go:334] "Generic (PLEG): container finished" podID="f2d32d17-7c63-427d-ba0b-d45aceaea477" containerID="41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a" exitCode=0 Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.931403 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.931398 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" event={"ID":"f2d32d17-7c63-427d-ba0b-d45aceaea477","Type":"ContainerDied","Data":"41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a"} Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.931483 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-lp4x8" event={"ID":"f2d32d17-7c63-427d-ba0b-d45aceaea477","Type":"ContainerDied","Data":"e09cd551dad5ad0f5903edceb4ae580dc6bdd6202f9ec8b85f4b66053cc9ca7b"} Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.931506 4687 scope.go:117] "RemoveContainer" containerID="41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.932715 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t5dt9" event={"ID":"a3522527-5e9c-4148-89ea-890feca4df8b","Type":"ContainerStarted","Data":"6e7368619a3c403923acac3cf2fe06eac18a70816cec77e66dd7478b19f4ea1f"} Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.933654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" event={"ID":"d1765443-5034-4dc1-a4f6-c23f73da8dc3","Type":"ContainerStarted","Data":"42a06f6ff47a54f208d066b329329cb3430a03b9dcc6fbd7a37dfbca57f208c2"} Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.934627 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" event={"ID":"13a2fc61-ea0f-4044-93a3-0d18611d756c","Type":"ContainerStarted","Data":"8a92733f4004d9dc4a82c3e7188c568e5ae5630535ccb0058a2f81fcc0feb4f8"} Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.936341 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dl99\" (UniqueName: 
\"kubernetes.io/projected/f2d32d17-7c63-427d-ba0b-d45aceaea477-kube-api-access-8dl99\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.936359 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.936367 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2d32d17-7c63-427d-ba0b-d45aceaea477-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.937059 4687 generic.go:334] "Generic (PLEG): container finished" podID="4954cd1d-111f-40c5-b681-739da73ea439" containerID="edd723eda3fae66589b6d7f02485709a92c2fee551845f6942abec7cabff6dbb" exitCode=0 Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.937096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" event={"ID":"4954cd1d-111f-40c5-b681-739da73ea439","Type":"ContainerDied","Data":"edd723eda3fae66589b6d7f02485709a92c2fee551845f6942abec7cabff6dbb"} Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.938199 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fe36f76e-b5b2-4dfe-923b-0516ea0af76f","Type":"ContainerStarted","Data":"9d77871ec89b3ee8ea330bcbc1af94718dfa4f3a9b22fd3a90a4293492fe3403"} Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.951206 4687 scope.go:117] "RemoveContainer" containerID="aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.963702 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-lp4x8"] Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.974015 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-666b6646f7-lp4x8"] Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.978269 4687 scope.go:117] "RemoveContainer" containerID="41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a" Dec 03 17:58:05 crc kubenswrapper[4687]: E1203 17:58:05.978718 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a\": container with ID starting with 41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a not found: ID does not exist" containerID="41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.978749 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a"} err="failed to get container status \"41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a\": rpc error: code = NotFound desc = could not find container \"41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a\": container with ID starting with 41855a561efb8e70b63585032b6c03b6300910c4f8bcac9c386ac4dc7d1ac62a not found: ID does not exist" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.978775 4687 scope.go:117] "RemoveContainer" containerID="aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf" Dec 03 17:58:05 crc kubenswrapper[4687]: E1203 17:58:05.979310 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf\": container with ID starting with aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf not found: ID does not exist" containerID="aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf" Dec 03 17:58:05 crc kubenswrapper[4687]: I1203 17:58:05.979333 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf"} err="failed to get container status \"aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf\": rpc error: code = NotFound desc = could not find container \"aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf\": container with ID starting with aeadd076a29848166ed46737fbee211c1ddd1c5a38e3d3cd2027bf50e0549ddf not found: ID does not exist" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.009276 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.009899 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d32d17-7c63-427d-ba0b-d45aceaea477" containerName="dnsmasq-dns" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.010026 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d32d17-7c63-427d-ba0b-d45aceaea477" containerName="dnsmasq-dns" Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.010162 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2d32d17-7c63-427d-ba0b-d45aceaea477" containerName="init" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.010233 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2d32d17-7c63-427d-ba0b-d45aceaea477" containerName="init" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.010496 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2d32d17-7c63-427d-ba0b-d45aceaea477" containerName="dnsmasq-dns" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.025371 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.027941 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.029811 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-8t7hd" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.030268 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.030681 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.030860 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.138968 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ab57f25f-0766-479b-ba47-e0b90c955b0d-cache\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.139006 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.139038 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ab57f25f-0766-479b-ba47-e0b90c955b0d-lock\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: 
I1203 17:58:06.139061 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.139308 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bd2q\" (UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-kube-api-access-6bd2q\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.240249 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ab57f25f-0766-479b-ba47-e0b90c955b0d-cache\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.240299 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.240334 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ab57f25f-0766-479b-ba47-e0b90c955b0d-lock\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.240356 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.240399 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bd2q\" (UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-kube-api-access-6bd2q\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.241197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ab57f25f-0766-479b-ba47-e0b90c955b0d-cache\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.241472 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.243765 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ab57f25f-0766-479b-ba47-e0b90c955b0d-lock\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.244691 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.244724 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found 
Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.244785 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift podName:ab57f25f-0766-479b-ba47-e0b90c955b0d nodeName:}" failed. No retries permitted until 2025-12-03 17:58:06.744763889 +0000 UTC m=+1119.635459322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift") pod "swift-storage-0" (UID: "ab57f25f-0766-479b-ba47-e0b90c955b0d") : configmap "swift-ring-files" not found Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.261846 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bd2q\" (UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-kube-api-access-6bd2q\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.270204 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.419618 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.443179 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlmtk\" (UniqueName: \"kubernetes.io/projected/4954cd1d-111f-40c5-b681-739da73ea439-kube-api-access-tlmtk\") pod \"4954cd1d-111f-40c5-b681-739da73ea439\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.443250 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-dns-svc\") pod \"4954cd1d-111f-40c5-b681-739da73ea439\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.443270 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-config\") pod \"4954cd1d-111f-40c5-b681-739da73ea439\" (UID: \"4954cd1d-111f-40c5-b681-739da73ea439\") " Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.607745 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4954cd1d-111f-40c5-b681-739da73ea439-kube-api-access-tlmtk" (OuterVolumeSpecName: "kube-api-access-tlmtk") pod "4954cd1d-111f-40c5-b681-739da73ea439" (UID: "4954cd1d-111f-40c5-b681-739da73ea439"). InnerVolumeSpecName "kube-api-access-tlmtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.644555 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-q2m8r"] Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.644867 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4954cd1d-111f-40c5-b681-739da73ea439" containerName="init" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.644879 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4954cd1d-111f-40c5-b681-739da73ea439" containerName="init" Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.644901 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4954cd1d-111f-40c5-b681-739da73ea439" containerName="dnsmasq-dns" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.644907 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4954cd1d-111f-40c5-b681-739da73ea439" containerName="dnsmasq-dns" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.645621 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4954cd1d-111f-40c5-b681-739da73ea439" containerName="dnsmasq-dns" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.646174 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.653646 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-config" (OuterVolumeSpecName: "config") pod "4954cd1d-111f-40c5-b681-739da73ea439" (UID: "4954cd1d-111f-40c5-b681-739da73ea439"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.653790 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.654038 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.654190 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.674557 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-kl6nk"] Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.680020 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.693102 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlmtk\" (UniqueName: \"kubernetes.io/projected/4954cd1d-111f-40c5-b681-739da73ea439-kube-api-access-tlmtk\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.693143 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.695988 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4954cd1d-111f-40c5-b681-739da73ea439" (UID: "4954cd1d-111f-40c5-b681-739da73ea439"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.702115 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-q2m8r"] Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.709064 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kl6nk"] Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.715102 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-q2m8r"] Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.725736 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-6xkpz ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-q2m8r" podUID="60f093fa-8302-466d-8326-4f09f1c75810" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-swiftconf\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794305 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-combined-ca-bundle\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794346 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/60f093fa-8302-466d-8326-4f09f1c75810-etc-swift\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794383 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-scripts\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-scripts\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794422 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-combined-ca-bundle\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xkpz\" (UniqueName: \"kubernetes.io/projected/60f093fa-8302-466d-8326-4f09f1c75810-kube-api-access-6xkpz\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794466 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5qzs\" 
(UniqueName: \"kubernetes.io/projected/9f72b95f-3e3d-49b4-8bca-8d391384a077-kube-api-access-w5qzs\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794484 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9f72b95f-3e3d-49b4-8bca-8d391384a077-etc-swift\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794512 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794534 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-dispersionconf\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794558 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-ring-data-devices\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794579 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-swiftconf\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794595 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-ring-data-devices\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794615 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-dispersionconf\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.794801 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.794815 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 17:58:06 crc kubenswrapper[4687]: E1203 17:58:06.794868 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift podName:ab57f25f-0766-479b-ba47-e0b90c955b0d nodeName:}" failed. No retries permitted until 2025-12-03 17:58:07.794850431 +0000 UTC m=+1120.685545874 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift") pod "swift-storage-0" (UID: "ab57f25f-0766-479b-ba47-e0b90c955b0d") : configmap "swift-ring-files" not found Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.794923 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4954cd1d-111f-40c5-b681-739da73ea439-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.896869 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-scripts\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.896914 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-scripts\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.896938 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-combined-ca-bundle\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.896958 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xkpz\" (UniqueName: \"kubernetes.io/projected/60f093fa-8302-466d-8326-4f09f1c75810-kube-api-access-6xkpz\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " 
pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.896988 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5qzs\" (UniqueName: \"kubernetes.io/projected/9f72b95f-3e3d-49b4-8bca-8d391384a077-kube-api-access-w5qzs\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.897006 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9f72b95f-3e3d-49b4-8bca-8d391384a077-etc-swift\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.897045 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-dispersionconf\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.897067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-ring-data-devices\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.897085 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-swiftconf\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc 
kubenswrapper[4687]: I1203 17:58:06.897102 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-ring-data-devices\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.897291 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-dispersionconf\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.897315 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-swiftconf\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.897659 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-scripts\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.898017 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-combined-ca-bundle\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.898059 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60f093fa-8302-466d-8326-4f09f1c75810-etc-swift\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.898089 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-ring-data-devices\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.898432 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60f093fa-8302-466d-8326-4f09f1c75810-etc-swift\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.898599 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-ring-data-devices\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.899110 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-scripts\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.899197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/9f72b95f-3e3d-49b4-8bca-8d391384a077-etc-swift\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.900729 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-dispersionconf\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.901261 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-combined-ca-bundle\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.901655 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-dispersionconf\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.901748 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-combined-ca-bundle\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.902061 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-swiftconf\") pod \"swift-ring-rebalance-kl6nk\" (UID: 
\"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.904049 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-swiftconf\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.913420 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5qzs\" (UniqueName: \"kubernetes.io/projected/9f72b95f-3e3d-49b4-8bca-8d391384a077-kube-api-access-w5qzs\") pod \"swift-ring-rebalance-kl6nk\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.913525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xkpz\" (UniqueName: \"kubernetes.io/projected/60f093fa-8302-466d-8326-4f09f1c75810-kube-api-access-6xkpz\") pod \"swift-ring-rebalance-q2m8r\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.959748 4687 generic.go:334] "Generic (PLEG): container finished" podID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" containerID="b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806" exitCode=0 Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.959842 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" event={"ID":"d1765443-5034-4dc1-a4f6-c23f73da8dc3","Type":"ContainerDied","Data":"b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806"} Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.962479 4687 generic.go:334] "Generic (PLEG): container finished" podID="13a2fc61-ea0f-4044-93a3-0d18611d756c" 
containerID="6d28f5213bc051a200c0eefe79874555e279ad638615376238be2320335855ff" exitCode=0 Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.962570 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" event={"ID":"13a2fc61-ea0f-4044-93a3-0d18611d756c","Type":"ContainerDied","Data":"6d28f5213bc051a200c0eefe79874555e279ad638615376238be2320335855ff"} Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.972908 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" event={"ID":"4954cd1d-111f-40c5-b681-739da73ea439","Type":"ContainerDied","Data":"d4fda51dd70cf6deb0c9ee5f8e3fdcc972d5e4d99ca189e1506a49353b5fe335"} Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.973570 4687 scope.go:117] "RemoveContainer" containerID="edd723eda3fae66589b6d7f02485709a92c2fee551845f6942abec7cabff6dbb" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.973800 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-574kj" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.981874 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4sqs2" event={"ID":"53de0da8-3b25-403a-9956-79082a62780b","Type":"ContainerStarted","Data":"3d2b7ced0ba3c59b07b0bc28d0de571d39548233d8199513475f0e322a68549c"} Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.992875 4687 generic.go:334] "Generic (PLEG): container finished" podID="a3522527-5e9c-4148-89ea-890feca4df8b" containerID="5c5812d5efeb6ca0a75782a26009d999ca30f00dfaf874da56275e451bf0a944" exitCode=0 Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.992972 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:06 crc kubenswrapper[4687]: I1203 17:58:06.993720 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t5dt9" event={"ID":"a3522527-5e9c-4148-89ea-890feca4df8b","Type":"ContainerDied","Data":"5c5812d5efeb6ca0a75782a26009d999ca30f00dfaf874da56275e451bf0a944"} Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.017195 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.021642 4687 scope.go:117] "RemoveContainer" containerID="0fbc663e92ecc08431b3847ef10aef70166f73559f3cbaec8a98ff2088134971" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.027756 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4sqs2" podStartSLOduration=3.0277373929999998 podStartE2EDuration="3.027737393s" podCreationTimestamp="2025-12-03 17:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:07.016981013 +0000 UTC m=+1119.907676456" watchObservedRunningTime="2025-12-03 17:58:07.027737393 +0000 UTC m=+1119.918432836" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.032775 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.097884 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-574kj"] Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.100481 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-scripts\") pod \"60f093fa-8302-466d-8326-4f09f1c75810\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.100544 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-swiftconf\") pod \"60f093fa-8302-466d-8326-4f09f1c75810\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.100576 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-combined-ca-bundle\") pod \"60f093fa-8302-466d-8326-4f09f1c75810\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.100632 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60f093fa-8302-466d-8326-4f09f1c75810-etc-swift\") pod \"60f093fa-8302-466d-8326-4f09f1c75810\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.100658 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xkpz\" (UniqueName: \"kubernetes.io/projected/60f093fa-8302-466d-8326-4f09f1c75810-kube-api-access-6xkpz\") pod \"60f093fa-8302-466d-8326-4f09f1c75810\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " Dec 03 17:58:07 
crc kubenswrapper[4687]: I1203 17:58:07.100673 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-dispersionconf\") pod \"60f093fa-8302-466d-8326-4f09f1c75810\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.100690 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-ring-data-devices\") pod \"60f093fa-8302-466d-8326-4f09f1c75810\" (UID: \"60f093fa-8302-466d-8326-4f09f1c75810\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.101282 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "60f093fa-8302-466d-8326-4f09f1c75810" (UID: "60f093fa-8302-466d-8326-4f09f1c75810"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.101598 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-scripts" (OuterVolumeSpecName: "scripts") pod "60f093fa-8302-466d-8326-4f09f1c75810" (UID: "60f093fa-8302-466d-8326-4f09f1c75810"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.102238 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60f093fa-8302-466d-8326-4f09f1c75810-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "60f093fa-8302-466d-8326-4f09f1c75810" (UID: "60f093fa-8302-466d-8326-4f09f1c75810"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.107110 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-574kj"] Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.108217 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "60f093fa-8302-466d-8326-4f09f1c75810" (UID: "60f093fa-8302-466d-8326-4f09f1c75810"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.108253 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "60f093fa-8302-466d-8326-4f09f1c75810" (UID: "60f093fa-8302-466d-8326-4f09f1c75810"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.118664 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60f093fa-8302-466d-8326-4f09f1c75810" (UID: "60f093fa-8302-466d-8326-4f09f1c75810"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.130285 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f093fa-8302-466d-8326-4f09f1c75810-kube-api-access-6xkpz" (OuterVolumeSpecName: "kube-api-access-6xkpz") pod "60f093fa-8302-466d-8326-4f09f1c75810" (UID: "60f093fa-8302-466d-8326-4f09f1c75810"). InnerVolumeSpecName "kube-api-access-6xkpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.201895 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xkpz\" (UniqueName: \"kubernetes.io/projected/60f093fa-8302-466d-8326-4f09f1c75810-kube-api-access-6xkpz\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.201924 4687 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.201934 4687 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.201943 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60f093fa-8302-466d-8326-4f09f1c75810-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.201951 4687 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.201968 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60f093fa-8302-466d-8326-4f09f1c75810-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.201978 4687 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/60f093fa-8302-466d-8326-4f09f1c75810-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.421383 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4954cd1d-111f-40c5-b681-739da73ea439" path="/var/lib/kubelet/pods/4954cd1d-111f-40c5-b681-739da73ea439/volumes" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.423051 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2d32d17-7c63-427d-ba0b-d45aceaea477" path="/var/lib/kubelet/pods/f2d32d17-7c63-427d-ba0b-d45aceaea477/volumes" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.464041 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:07 crc kubenswrapper[4687]: E1203 17:58:07.563682 4687 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 03 17:58:07 crc kubenswrapper[4687]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d1765443-5034-4dc1-a4f6-c23f73da8dc3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 03 17:58:07 crc kubenswrapper[4687]: > podSandboxID="42a06f6ff47a54f208d066b329329cb3430a03b9dcc6fbd7a37dfbca57f208c2" Dec 03 17:58:07 crc kubenswrapper[4687]: E1203 17:58:07.563816 4687 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 03 17:58:07 crc kubenswrapper[4687]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fsg6n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86db49b7ff-jrhlf_openstack(d1765443-5034-4dc1-a4f6-c23f73da8dc3): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d1765443-5034-4dc1-a4f6-c23f73da8dc3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 03 17:58:07 crc kubenswrapper[4687]: > logger="UnhandledError" Dec 03 17:58:07 crc kubenswrapper[4687]: E1203 17:58:07.565052 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d1765443-5034-4dc1-a4f6-c23f73da8dc3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" podUID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.607239 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bvfh\" (UniqueName: \"kubernetes.io/projected/13a2fc61-ea0f-4044-93a3-0d18611d756c-kube-api-access-6bvfh\") pod \"13a2fc61-ea0f-4044-93a3-0d18611d756c\" (UID: 
\"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.607319 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-dns-svc\") pod \"13a2fc61-ea0f-4044-93a3-0d18611d756c\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.607540 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-config\") pod \"13a2fc61-ea0f-4044-93a3-0d18611d756c\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.607586 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-ovsdbserver-nb\") pod \"13a2fc61-ea0f-4044-93a3-0d18611d756c\" (UID: \"13a2fc61-ea0f-4044-93a3-0d18611d756c\") " Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.623713 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a2fc61-ea0f-4044-93a3-0d18611d756c-kube-api-access-6bvfh" (OuterVolumeSpecName: "kube-api-access-6bvfh") pod "13a2fc61-ea0f-4044-93a3-0d18611d756c" (UID: "13a2fc61-ea0f-4044-93a3-0d18611d756c"). InnerVolumeSpecName "kube-api-access-6bvfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.670056 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13a2fc61-ea0f-4044-93a3-0d18611d756c" (UID: "13a2fc61-ea0f-4044-93a3-0d18611d756c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.674294 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-config" (OuterVolumeSpecName: "config") pod "13a2fc61-ea0f-4044-93a3-0d18611d756c" (UID: "13a2fc61-ea0f-4044-93a3-0d18611d756c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.679691 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13a2fc61-ea0f-4044-93a3-0d18611d756c" (UID: "13a2fc61-ea0f-4044-93a3-0d18611d756c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.708528 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kl6nk"] Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.710300 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bvfh\" (UniqueName: \"kubernetes.io/projected/13a2fc61-ea0f-4044-93a3-0d18611d756c-kube-api-access-6bvfh\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.710323 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.710334 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.710346 4687 reconciler_common.go:293] "Volume detached for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a2fc61-ea0f-4044-93a3-0d18611d756c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:07 crc kubenswrapper[4687]: I1203 17:58:07.811782 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:07 crc kubenswrapper[4687]: E1203 17:58:07.812036 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 17:58:07 crc kubenswrapper[4687]: E1203 17:58:07.812058 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 17:58:07 crc kubenswrapper[4687]: E1203 17:58:07.812139 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift podName:ab57f25f-0766-479b-ba47-e0b90c955b0d nodeName:}" failed. No retries permitted until 2025-12-03 17:58:09.812096914 +0000 UTC m=+1122.702792357 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift") pod "swift-storage-0" (UID: "ab57f25f-0766-479b-ba47-e0b90c955b0d") : configmap "swift-ring-files" not found Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.006167 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t5dt9" event={"ID":"a3522527-5e9c-4148-89ea-890feca4df8b","Type":"ContainerStarted","Data":"6208415cd4beae3b8ac7537acd5001d823e63987cefe1a508d2696b585e4f205"} Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.007199 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.008271 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.008344 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-c4zz6" event={"ID":"13a2fc61-ea0f-4044-93a3-0d18611d756c","Type":"ContainerDied","Data":"8a92733f4004d9dc4a82c3e7188c568e5ae5630535ccb0058a2f81fcc0feb4f8"} Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.008655 4687 scope.go:117] "RemoveContainer" containerID="6d28f5213bc051a200c0eefe79874555e279ad638615376238be2320335855ff" Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.012320 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fe36f76e-b5b2-4dfe-923b-0516ea0af76f","Type":"ContainerStarted","Data":"fa65a92bded1cf47f60946c7cf1240c7963c74396f621533f1e2aa4055f2e300"} Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.012355 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"fe36f76e-b5b2-4dfe-923b-0516ea0af76f","Type":"ContainerStarted","Data":"09c74d82fd8722e8e82cfbe6ad6f467be16f9962f6a5f18b5c7a097be0cb54a6"} Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.012878 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.015244 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kl6nk" event={"ID":"9f72b95f-3e3d-49b4-8bca-8d391384a077","Type":"ContainerStarted","Data":"ac3eee83c967b07692aae124e8c4dbbc95ee8564abb8ac99d5c5c8fe584fecfb"} Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.015271 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q2m8r" Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.046234 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-t5dt9" podStartSLOduration=4.046216258 podStartE2EDuration="4.046216258s" podCreationTimestamp="2025-12-03 17:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:08.024611237 +0000 UTC m=+1120.915306680" watchObservedRunningTime="2025-12-03 17:58:08.046216258 +0000 UTC m=+1120.936911691" Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.070154 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-q2m8r"] Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.087420 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-q2m8r"] Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.089199 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.6403058809999997 podStartE2EDuration="4.089186086s" podCreationTimestamp="2025-12-03 17:58:04 +0000 UTC" 
firstStartedPulling="2025-12-03 17:58:05.572954469 +0000 UTC m=+1118.463649902" lastFinishedPulling="2025-12-03 17:58:07.021834654 +0000 UTC m=+1119.912530107" observedRunningTime="2025-12-03 17:58:08.071564181 +0000 UTC m=+1120.962259614" watchObservedRunningTime="2025-12-03 17:58:08.089186086 +0000 UTC m=+1120.979881519" Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.128380 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-c4zz6"] Dec 03 17:58:08 crc kubenswrapper[4687]: I1203 17:58:08.135616 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-c4zz6"] Dec 03 17:58:09 crc kubenswrapper[4687]: I1203 17:58:09.026129 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" event={"ID":"d1765443-5034-4dc1-a4f6-c23f73da8dc3","Type":"ContainerStarted","Data":"afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf"} Dec 03 17:58:09 crc kubenswrapper[4687]: I1203 17:58:09.027323 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:09 crc kubenswrapper[4687]: I1203 17:58:09.054213 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" podStartSLOduration=5.054194181 podStartE2EDuration="5.054194181s" podCreationTimestamp="2025-12-03 17:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:09.047980825 +0000 UTC m=+1121.938676268" watchObservedRunningTime="2025-12-03 17:58:09.054194181 +0000 UTC m=+1121.944889614" Dec 03 17:58:09 crc kubenswrapper[4687]: I1203 17:58:09.417759 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a2fc61-ea0f-4044-93a3-0d18611d756c" path="/var/lib/kubelet/pods/13a2fc61-ea0f-4044-93a3-0d18611d756c/volumes" Dec 03 17:58:09 crc 
kubenswrapper[4687]: I1203 17:58:09.418460 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f093fa-8302-466d-8326-4f09f1c75810" path="/var/lib/kubelet/pods/60f093fa-8302-466d-8326-4f09f1c75810/volumes" Dec 03 17:58:09 crc kubenswrapper[4687]: I1203 17:58:09.853054 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:09 crc kubenswrapper[4687]: E1203 17:58:09.853323 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 17:58:09 crc kubenswrapper[4687]: E1203 17:58:09.853345 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 17:58:09 crc kubenswrapper[4687]: E1203 17:58:09.853394 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift podName:ab57f25f-0766-479b-ba47-e0b90c955b0d nodeName:}" failed. No retries permitted until 2025-12-03 17:58:13.853377822 +0000 UTC m=+1126.744073255 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift") pod "swift-storage-0" (UID: "ab57f25f-0766-479b-ba47-e0b90c955b0d") : configmap "swift-ring-files" not found Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.048383 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kl6nk" event={"ID":"9f72b95f-3e3d-49b4-8bca-8d391384a077","Type":"ContainerStarted","Data":"24e9958065ceb80ed68dafb777d6222e01e9c41a1a5786b36cec1b9b4d159792"} Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.069432 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-kl6nk" podStartSLOduration=2.338450158 podStartE2EDuration="6.069415176s" podCreationTimestamp="2025-12-03 17:58:06 +0000 UTC" firstStartedPulling="2025-12-03 17:58:07.711188357 +0000 UTC m=+1120.601883790" lastFinishedPulling="2025-12-03 17:58:11.442153375 +0000 UTC m=+1124.332848808" observedRunningTime="2025-12-03 17:58:12.060883005 +0000 UTC m=+1124.951578478" watchObservedRunningTime="2025-12-03 17:58:12.069415176 +0000 UTC m=+1124.960110599" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.228337 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bb0e-account-create-update-qmlsr"] Dec 03 17:58:12 crc kubenswrapper[4687]: E1203 17:58:12.228824 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a2fc61-ea0f-4044-93a3-0d18611d756c" containerName="init" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.228856 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a2fc61-ea0f-4044-93a3-0d18611d756c" containerName="init" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.229175 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a2fc61-ea0f-4044-93a3-0d18611d756c" containerName="init" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.230037 4687 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.231870 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.239576 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb0e-account-create-update-qmlsr"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.279832 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nrxds"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.285441 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.286107 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nrxds"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.291264 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w692j\" (UniqueName: \"kubernetes.io/projected/355d39b5-af63-4905-909a-5ac6168fd205-kube-api-access-w692j\") pod \"keystone-db-create-nrxds\" (UID: \"355d39b5-af63-4905-909a-5ac6168fd205\") " pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.291296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcs7w\" (UniqueName: \"kubernetes.io/projected/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-kube-api-access-bcs7w\") pod \"keystone-bb0e-account-create-update-qmlsr\" (UID: \"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\") " pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.291437 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d39b5-af63-4905-909a-5ac6168fd205-operator-scripts\") pod \"keystone-db-create-nrxds\" (UID: \"355d39b5-af63-4905-909a-5ac6168fd205\") " pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.291587 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-operator-scripts\") pod \"keystone-bb0e-account-create-update-qmlsr\" (UID: \"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\") " pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.392828 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d39b5-af63-4905-909a-5ac6168fd205-operator-scripts\") pod \"keystone-db-create-nrxds\" (UID: \"355d39b5-af63-4905-909a-5ac6168fd205\") " pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.392918 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-operator-scripts\") pod \"keystone-bb0e-account-create-update-qmlsr\" (UID: \"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\") " pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.392956 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w692j\" (UniqueName: \"kubernetes.io/projected/355d39b5-af63-4905-909a-5ac6168fd205-kube-api-access-w692j\") pod \"keystone-db-create-nrxds\" (UID: \"355d39b5-af63-4905-909a-5ac6168fd205\") " pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.392972 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bcs7w\" (UniqueName: \"kubernetes.io/projected/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-kube-api-access-bcs7w\") pod \"keystone-bb0e-account-create-update-qmlsr\" (UID: \"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\") " pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.394500 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-operator-scripts\") pod \"keystone-bb0e-account-create-update-qmlsr\" (UID: \"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\") " pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.394522 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d39b5-af63-4905-909a-5ac6168fd205-operator-scripts\") pod \"keystone-db-create-nrxds\" (UID: \"355d39b5-af63-4905-909a-5ac6168fd205\") " pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.412267 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w692j\" (UniqueName: \"kubernetes.io/projected/355d39b5-af63-4905-909a-5ac6168fd205-kube-api-access-w692j\") pod \"keystone-db-create-nrxds\" (UID: \"355d39b5-af63-4905-909a-5ac6168fd205\") " pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.412561 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcs7w\" (UniqueName: \"kubernetes.io/projected/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-kube-api-access-bcs7w\") pod \"keystone-bb0e-account-create-update-qmlsr\" (UID: \"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\") " pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.497140 4687 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wtd6t"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.528880 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wtd6t"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.529018 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.553731 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.567258 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3b20-account-create-update-qfsqf"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.568633 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.573584 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.600027 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxqlv\" (UniqueName: \"kubernetes.io/projected/4349de4c-40de-4644-8653-bdd32bf7f9fa-kube-api-access-wxqlv\") pod \"placement-3b20-account-create-update-qfsqf\" (UID: \"4349de4c-40de-4644-8653-bdd32bf7f9fa\") " pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.600105 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349de4c-40de-4644-8653-bdd32bf7f9fa-operator-scripts\") pod \"placement-3b20-account-create-update-qfsqf\" (UID: \"4349de4c-40de-4644-8653-bdd32bf7f9fa\") " 
pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.600219 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75727b99-ff6a-438e-9cff-479f1b1331e3-operator-scripts\") pod \"placement-db-create-wtd6t\" (UID: \"75727b99-ff6a-438e-9cff-479f1b1331e3\") " pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.600255 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl8jz\" (UniqueName: \"kubernetes.io/projected/75727b99-ff6a-438e-9cff-479f1b1331e3-kube-api-access-hl8jz\") pod \"placement-db-create-wtd6t\" (UID: \"75727b99-ff6a-438e-9cff-479f1b1331e3\") " pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.602802 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.609503 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3b20-account-create-update-qfsqf"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.701790 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349de4c-40de-4644-8653-bdd32bf7f9fa-operator-scripts\") pod \"placement-3b20-account-create-update-qfsqf\" (UID: \"4349de4c-40de-4644-8653-bdd32bf7f9fa\") " pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.702217 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75727b99-ff6a-438e-9cff-479f1b1331e3-operator-scripts\") pod \"placement-db-create-wtd6t\" (UID: 
\"75727b99-ff6a-438e-9cff-479f1b1331e3\") " pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.702250 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl8jz\" (UniqueName: \"kubernetes.io/projected/75727b99-ff6a-438e-9cff-479f1b1331e3-kube-api-access-hl8jz\") pod \"placement-db-create-wtd6t\" (UID: \"75727b99-ff6a-438e-9cff-479f1b1331e3\") " pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.702344 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxqlv\" (UniqueName: \"kubernetes.io/projected/4349de4c-40de-4644-8653-bdd32bf7f9fa-kube-api-access-wxqlv\") pod \"placement-3b20-account-create-update-qfsqf\" (UID: \"4349de4c-40de-4644-8653-bdd32bf7f9fa\") " pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.702541 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349de4c-40de-4644-8653-bdd32bf7f9fa-operator-scripts\") pod \"placement-3b20-account-create-update-qfsqf\" (UID: \"4349de4c-40de-4644-8653-bdd32bf7f9fa\") " pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.704689 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75727b99-ff6a-438e-9cff-479f1b1331e3-operator-scripts\") pod \"placement-db-create-wtd6t\" (UID: \"75727b99-ff6a-438e-9cff-479f1b1331e3\") " pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.723621 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl8jz\" (UniqueName: \"kubernetes.io/projected/75727b99-ff6a-438e-9cff-479f1b1331e3-kube-api-access-hl8jz\") pod 
\"placement-db-create-wtd6t\" (UID: \"75727b99-ff6a-438e-9cff-479f1b1331e3\") " pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.723812 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxqlv\" (UniqueName: \"kubernetes.io/projected/4349de4c-40de-4644-8653-bdd32bf7f9fa-kube-api-access-wxqlv\") pod \"placement-3b20-account-create-update-qfsqf\" (UID: \"4349de4c-40de-4644-8653-bdd32bf7f9fa\") " pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.763661 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-cd6rr"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.764659 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.771770 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cd6rr"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.803963 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16caedb-6156-48c7-b43f-b012eb8f49ab-operator-scripts\") pod \"glance-db-create-cd6rr\" (UID: \"d16caedb-6156-48c7-b43f-b012eb8f49ab\") " pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.804041 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8t4h\" (UniqueName: \"kubernetes.io/projected/d16caedb-6156-48c7-b43f-b012eb8f49ab-kube-api-access-v8t4h\") pod \"glance-db-create-cd6rr\" (UID: \"d16caedb-6156-48c7-b43f-b012eb8f49ab\") " pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.863383 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.868775 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8805-account-create-update-5csps"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.869795 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.873095 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.883786 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8805-account-create-update-5csps"] Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.905052 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af7e623-619c-4f5a-87ca-264e29b9043c-operator-scripts\") pod \"glance-8805-account-create-update-5csps\" (UID: \"2af7e623-619c-4f5a-87ca-264e29b9043c\") " pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.905151 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7j8c\" (UniqueName: \"kubernetes.io/projected/2af7e623-619c-4f5a-87ca-264e29b9043c-kube-api-access-k7j8c\") pod \"glance-8805-account-create-update-5csps\" (UID: \"2af7e623-619c-4f5a-87ca-264e29b9043c\") " pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.905196 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16caedb-6156-48c7-b43f-b012eb8f49ab-operator-scripts\") pod \"glance-db-create-cd6rr\" (UID: \"d16caedb-6156-48c7-b43f-b012eb8f49ab\") " 
pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.905239 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8t4h\" (UniqueName: \"kubernetes.io/projected/d16caedb-6156-48c7-b43f-b012eb8f49ab-kube-api-access-v8t4h\") pod \"glance-db-create-cd6rr\" (UID: \"d16caedb-6156-48c7-b43f-b012eb8f49ab\") " pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.906416 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16caedb-6156-48c7-b43f-b012eb8f49ab-operator-scripts\") pod \"glance-db-create-cd6rr\" (UID: \"d16caedb-6156-48c7-b43f-b012eb8f49ab\") " pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.948959 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8t4h\" (UniqueName: \"kubernetes.io/projected/d16caedb-6156-48c7-b43f-b012eb8f49ab-kube-api-access-v8t4h\") pod \"glance-db-create-cd6rr\" (UID: \"d16caedb-6156-48c7-b43f-b012eb8f49ab\") " pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:12 crc kubenswrapper[4687]: I1203 17:58:12.995528 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.007291 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af7e623-619c-4f5a-87ca-264e29b9043c-operator-scripts\") pod \"glance-8805-account-create-update-5csps\" (UID: \"2af7e623-619c-4f5a-87ca-264e29b9043c\") " pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.007359 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7j8c\" (UniqueName: \"kubernetes.io/projected/2af7e623-619c-4f5a-87ca-264e29b9043c-kube-api-access-k7j8c\") pod \"glance-8805-account-create-update-5csps\" (UID: \"2af7e623-619c-4f5a-87ca-264e29b9043c\") " pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.008271 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af7e623-619c-4f5a-87ca-264e29b9043c-operator-scripts\") pod \"glance-8805-account-create-update-5csps\" (UID: \"2af7e623-619c-4f5a-87ca-264e29b9043c\") " pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.039805 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7j8c\" (UniqueName: \"kubernetes.io/projected/2af7e623-619c-4f5a-87ca-264e29b9043c-kube-api-access-k7j8c\") pod \"glance-8805-account-create-update-5csps\" (UID: \"2af7e623-619c-4f5a-87ca-264e29b9043c\") " pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.083940 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.088302 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bb0e-account-create-update-qmlsr"] Dec 03 17:58:13 crc kubenswrapper[4687]: W1203 17:58:13.100795 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd3a4272_20d4_4e23_99b3_5d8d3a729f16.slice/crio-d53364a7433c7e8c833f78e281b4d2f5146be20f066f081c0fb59bed0a3f69d5 WatchSource:0}: Error finding container d53364a7433c7e8c833f78e281b4d2f5146be20f066f081c0fb59bed0a3f69d5: Status 404 returned error can't find the container with id d53364a7433c7e8c833f78e281b4d2f5146be20f066f081c0fb59bed0a3f69d5 Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.122949 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nrxds"] Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.190696 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.425998 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wtd6t"] Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.516447 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3b20-account-create-update-qfsqf"] Dec 03 17:58:13 crc kubenswrapper[4687]: W1203 17:58:13.521324 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4349de4c_40de_4644_8653_bdd32bf7f9fa.slice/crio-e0b3c9967943f120fb1a5e6941d7d02bbc7dc245e4559428213f9ac9e713fef8 WatchSource:0}: Error finding container e0b3c9967943f120fb1a5e6941d7d02bbc7dc245e4559428213f9ac9e713fef8: Status 404 returned error can't find the container with id e0b3c9967943f120fb1a5e6941d7d02bbc7dc245e4559428213f9ac9e713fef8 Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.587289 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-cd6rr"] Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.721470 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8805-account-create-update-5csps"] Dec 03 17:58:13 crc kubenswrapper[4687]: W1203 17:58:13.753792 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2af7e623_619c_4f5a_87ca_264e29b9043c.slice/crio-44d13c38ae6f4653cd6d52dbf0127c10d3f3de73328ea1d4a023543e179f4899 WatchSource:0}: Error finding container 44d13c38ae6f4653cd6d52dbf0127c10d3f3de73328ea1d4a023543e179f4899: Status 404 returned error can't find the container with id 44d13c38ae6f4653cd6d52dbf0127c10d3f3de73328ea1d4a023543e179f4899 Dec 03 17:58:13 crc kubenswrapper[4687]: I1203 17:58:13.924631 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:13 crc kubenswrapper[4687]: E1203 17:58:13.924865 4687 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 03 17:58:13 crc kubenswrapper[4687]: E1203 17:58:13.925347 4687 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 03 17:58:13 crc kubenswrapper[4687]: E1203 17:58:13.925452 4687 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift podName:ab57f25f-0766-479b-ba47-e0b90c955b0d nodeName:}" failed. No retries permitted until 2025-12-03 17:58:21.925423644 +0000 UTC m=+1134.816119077 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift") pod "swift-storage-0" (UID: "ab57f25f-0766-479b-ba47-e0b90c955b0d") : configmap "swift-ring-files" not found Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.096977 4687 generic.go:334] "Generic (PLEG): container finished" podID="355d39b5-af63-4905-909a-5ac6168fd205" containerID="54dc7d0c794505c600e646df7438d2ae3659cca109a5bddcc51466e30b8f7ef3" exitCode=0 Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.097037 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nrxds" event={"ID":"355d39b5-af63-4905-909a-5ac6168fd205","Type":"ContainerDied","Data":"54dc7d0c794505c600e646df7438d2ae3659cca109a5bddcc51466e30b8f7ef3"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.097109 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nrxds" 
event={"ID":"355d39b5-af63-4905-909a-5ac6168fd205","Type":"ContainerStarted","Data":"d23f19c557d5cf081b79601bdacc08e60c5cd2d36588742f0bb291402543bedc"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.098815 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8805-account-create-update-5csps" event={"ID":"2af7e623-619c-4f5a-87ca-264e29b9043c","Type":"ContainerStarted","Data":"4ebec572bef66c102c90eed60bb105e37e4e9fd63725aee7d8b9a0bc3a94173f"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.098855 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8805-account-create-update-5csps" event={"ID":"2af7e623-619c-4f5a-87ca-264e29b9043c","Type":"ContainerStarted","Data":"44d13c38ae6f4653cd6d52dbf0127c10d3f3de73328ea1d4a023543e179f4899"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.101800 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3b20-account-create-update-qfsqf" event={"ID":"4349de4c-40de-4644-8653-bdd32bf7f9fa","Type":"ContainerStarted","Data":"79f628c92d2f9b7861db3650f2be94626edb0b8a930f87b386e1048d7ec5c209"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.101873 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3b20-account-create-update-qfsqf" event={"ID":"4349de4c-40de-4644-8653-bdd32bf7f9fa","Type":"ContainerStarted","Data":"e0b3c9967943f120fb1a5e6941d7d02bbc7dc245e4559428213f9ac9e713fef8"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.105947 4687 generic.go:334] "Generic (PLEG): container finished" podID="cd3a4272-20d4-4e23-99b3-5d8d3a729f16" containerID="9f0db8844203c010e2ab55d844400ae5d870ddd94854f3f0f5970da104c655ae" exitCode=0 Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.106047 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb0e-account-create-update-qmlsr" 
event={"ID":"cd3a4272-20d4-4e23-99b3-5d8d3a729f16","Type":"ContainerDied","Data":"9f0db8844203c010e2ab55d844400ae5d870ddd94854f3f0f5970da104c655ae"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.106083 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb0e-account-create-update-qmlsr" event={"ID":"cd3a4272-20d4-4e23-99b3-5d8d3a729f16","Type":"ContainerStarted","Data":"d53364a7433c7e8c833f78e281b4d2f5146be20f066f081c0fb59bed0a3f69d5"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.107851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cd6rr" event={"ID":"d16caedb-6156-48c7-b43f-b012eb8f49ab","Type":"ContainerStarted","Data":"993448c13f795ac7e1e3bf969a8213bafdaf8f829d44a8b9cf9738999fa20996"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.107926 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cd6rr" event={"ID":"d16caedb-6156-48c7-b43f-b012eb8f49ab","Type":"ContainerStarted","Data":"25c1871cad958b1bb73fa43ae9a0db8872bc35f9fa64e293216b69e1853feb56"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.109659 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wtd6t" event={"ID":"75727b99-ff6a-438e-9cff-479f1b1331e3","Type":"ContainerStarted","Data":"37d302a38c36526d6c452de73aefbdad78287c6af21349121d021107fc3bc5a9"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.109741 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wtd6t" event={"ID":"75727b99-ff6a-438e-9cff-479f1b1331e3","Type":"ContainerStarted","Data":"7eb152e7b2fa527cbbb1061ea663ef01aac0791b8c9e111fe1106310c7b15b6b"} Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.111269 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.111316 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.111358 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.111841 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b5046e7c2fc69da47de778c08a447a041ab0f6ce5bedb54a043d37f682e5a7a"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.111912 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://5b5046e7c2fc69da47de778c08a447a041ab0f6ce5bedb54a043d37f682e5a7a" gracePeriod=600 Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.157062 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-cd6rr" podStartSLOduration=2.157041402 podStartE2EDuration="2.157041402s" podCreationTimestamp="2025-12-03 17:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:14.130294691 +0000 UTC 
m=+1127.020990124" watchObservedRunningTime="2025-12-03 17:58:14.157041402 +0000 UTC m=+1127.047736835" Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.171403 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-wtd6t" podStartSLOduration=2.171383658 podStartE2EDuration="2.171383658s" podCreationTimestamp="2025-12-03 17:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:14.144476993 +0000 UTC m=+1127.035172426" watchObservedRunningTime="2025-12-03 17:58:14.171383658 +0000 UTC m=+1127.062079091" Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.181219 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3b20-account-create-update-qfsqf" podStartSLOduration=2.181197762 podStartE2EDuration="2.181197762s" podCreationTimestamp="2025-12-03 17:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:14.160163106 +0000 UTC m=+1127.050858539" watchObservedRunningTime="2025-12-03 17:58:14.181197762 +0000 UTC m=+1127.071893195" Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.194455 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-8805-account-create-update-5csps" podStartSLOduration=2.194436508 podStartE2EDuration="2.194436508s" podCreationTimestamp="2025-12-03 17:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:14.186091873 +0000 UTC m=+1127.076787306" watchObservedRunningTime="2025-12-03 17:58:14.194436508 +0000 UTC m=+1127.085131941" Dec 03 17:58:14 crc kubenswrapper[4687]: I1203 17:58:14.751338 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.119777 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="5b5046e7c2fc69da47de778c08a447a041ab0f6ce5bedb54a043d37f682e5a7a" exitCode=0 Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.119841 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"5b5046e7c2fc69da47de778c08a447a041ab0f6ce5bedb54a043d37f682e5a7a"} Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.120110 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"db902a5bffdbf33c8da58cdee4ed48423a21c1c42eeecaaf4efe21343a963605"} Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.120150 4687 scope.go:117] "RemoveContainer" containerID="15f3686b8b444d7ca51bf051ca58c72afb51a20e88ac7611ce3fcbdca0c8e6a0" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.121798 4687 generic.go:334] "Generic (PLEG): container finished" podID="75727b99-ff6a-438e-9cff-479f1b1331e3" containerID="37d302a38c36526d6c452de73aefbdad78287c6af21349121d021107fc3bc5a9" exitCode=0 Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.121884 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wtd6t" event={"ID":"75727b99-ff6a-438e-9cff-479f1b1331e3","Type":"ContainerDied","Data":"37d302a38c36526d6c452de73aefbdad78287c6af21349121d021107fc3bc5a9"} Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.123278 4687 generic.go:334] "Generic (PLEG): container finished" podID="2af7e623-619c-4f5a-87ca-264e29b9043c" containerID="4ebec572bef66c102c90eed60bb105e37e4e9fd63725aee7d8b9a0bc3a94173f" exitCode=0 Dec 03 17:58:15 crc 
kubenswrapper[4687]: I1203 17:58:15.123325 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8805-account-create-update-5csps" event={"ID":"2af7e623-619c-4f5a-87ca-264e29b9043c","Type":"ContainerDied","Data":"4ebec572bef66c102c90eed60bb105e37e4e9fd63725aee7d8b9a0bc3a94173f"} Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.124652 4687 generic.go:334] "Generic (PLEG): container finished" podID="4349de4c-40de-4644-8653-bdd32bf7f9fa" containerID="79f628c92d2f9b7861db3650f2be94626edb0b8a930f87b386e1048d7ec5c209" exitCode=0 Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.124783 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3b20-account-create-update-qfsqf" event={"ID":"4349de4c-40de-4644-8653-bdd32bf7f9fa","Type":"ContainerDied","Data":"79f628c92d2f9b7861db3650f2be94626edb0b8a930f87b386e1048d7ec5c209"} Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.125941 4687 generic.go:334] "Generic (PLEG): container finished" podID="d16caedb-6156-48c7-b43f-b012eb8f49ab" containerID="993448c13f795ac7e1e3bf969a8213bafdaf8f829d44a8b9cf9738999fa20996" exitCode=0 Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.126031 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cd6rr" event={"ID":"d16caedb-6156-48c7-b43f-b012eb8f49ab","Type":"ContainerDied","Data":"993448c13f795ac7e1e3bf969a8213bafdaf8f829d44a8b9cf9738999fa20996"} Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.192292 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.268379 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jrhlf"] Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.268923 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" 
podUID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" containerName="dnsmasq-dns" containerID="cri-o://afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf" gracePeriod=10 Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.511134 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.517251 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.576901 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-operator-scripts\") pod \"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\" (UID: \"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\") " Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.577038 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w692j\" (UniqueName: \"kubernetes.io/projected/355d39b5-af63-4905-909a-5ac6168fd205-kube-api-access-w692j\") pod \"355d39b5-af63-4905-909a-5ac6168fd205\" (UID: \"355d39b5-af63-4905-909a-5ac6168fd205\") " Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.577115 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d39b5-af63-4905-909a-5ac6168fd205-operator-scripts\") pod \"355d39b5-af63-4905-909a-5ac6168fd205\" (UID: \"355d39b5-af63-4905-909a-5ac6168fd205\") " Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.577193 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcs7w\" (UniqueName: \"kubernetes.io/projected/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-kube-api-access-bcs7w\") pod \"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\" (UID: 
\"cd3a4272-20d4-4e23-99b3-5d8d3a729f16\") " Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.578010 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd3a4272-20d4-4e23-99b3-5d8d3a729f16" (UID: "cd3a4272-20d4-4e23-99b3-5d8d3a729f16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.578089 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/355d39b5-af63-4905-909a-5ac6168fd205-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "355d39b5-af63-4905-909a-5ac6168fd205" (UID: "355d39b5-af63-4905-909a-5ac6168fd205"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.587428 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-kube-api-access-bcs7w" (OuterVolumeSpecName: "kube-api-access-bcs7w") pod "cd3a4272-20d4-4e23-99b3-5d8d3a729f16" (UID: "cd3a4272-20d4-4e23-99b3-5d8d3a729f16"). InnerVolumeSpecName "kube-api-access-bcs7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.600363 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355d39b5-af63-4905-909a-5ac6168fd205-kube-api-access-w692j" (OuterVolumeSpecName: "kube-api-access-w692j") pod "355d39b5-af63-4905-909a-5ac6168fd205" (UID: "355d39b5-af63-4905-909a-5ac6168fd205"). InnerVolumeSpecName "kube-api-access-w692j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.683081 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355d39b5-af63-4905-909a-5ac6168fd205-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.683116 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcs7w\" (UniqueName: \"kubernetes.io/projected/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-kube-api-access-bcs7w\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.683138 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd3a4272-20d4-4e23-99b3-5d8d3a729f16-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.683146 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w692j\" (UniqueName: \"kubernetes.io/projected/355d39b5-af63-4905-909a-5ac6168fd205-kube-api-access-w692j\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.763040 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.783642 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-config\") pod \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.783699 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-nb\") pod \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.783878 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsg6n\" (UniqueName: \"kubernetes.io/projected/d1765443-5034-4dc1-a4f6-c23f73da8dc3-kube-api-access-fsg6n\") pod \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.783958 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-dns-svc\") pod \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.783986 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-sb\") pod \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\" (UID: \"d1765443-5034-4dc1-a4f6-c23f73da8dc3\") " Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.787327 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d1765443-5034-4dc1-a4f6-c23f73da8dc3-kube-api-access-fsg6n" (OuterVolumeSpecName: "kube-api-access-fsg6n") pod "d1765443-5034-4dc1-a4f6-c23f73da8dc3" (UID: "d1765443-5034-4dc1-a4f6-c23f73da8dc3"). InnerVolumeSpecName "kube-api-access-fsg6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.828664 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-config" (OuterVolumeSpecName: "config") pod "d1765443-5034-4dc1-a4f6-c23f73da8dc3" (UID: "d1765443-5034-4dc1-a4f6-c23f73da8dc3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.857081 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1765443-5034-4dc1-a4f6-c23f73da8dc3" (UID: "d1765443-5034-4dc1-a4f6-c23f73da8dc3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.857304 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1765443-5034-4dc1-a4f6-c23f73da8dc3" (UID: "d1765443-5034-4dc1-a4f6-c23f73da8dc3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.862723 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1765443-5034-4dc1-a4f6-c23f73da8dc3" (UID: "d1765443-5034-4dc1-a4f6-c23f73da8dc3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.886082 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.886115 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.886140 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.886148 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1765443-5034-4dc1-a4f6-c23f73da8dc3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:15 crc kubenswrapper[4687]: I1203 17:58:15.886158 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsg6n\" (UniqueName: \"kubernetes.io/projected/d1765443-5034-4dc1-a4f6-c23f73da8dc3-kube-api-access-fsg6n\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.142576 4687 generic.go:334] "Generic (PLEG): container finished" podID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" containerID="afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf" exitCode=0 Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.142640 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" event={"ID":"d1765443-5034-4dc1-a4f6-c23f73da8dc3","Type":"ContainerDied","Data":"afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf"} Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 
17:58:16.142659 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.143077 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-jrhlf" event={"ID":"d1765443-5034-4dc1-a4f6-c23f73da8dc3","Type":"ContainerDied","Data":"42a06f6ff47a54f208d066b329329cb3430a03b9dcc6fbd7a37dfbca57f208c2"} Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.143167 4687 scope.go:117] "RemoveContainer" containerID="afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.155253 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nrxds" event={"ID":"355d39b5-af63-4905-909a-5ac6168fd205","Type":"ContainerDied","Data":"d23f19c557d5cf081b79601bdacc08e60c5cd2d36588742f0bb291402543bedc"} Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.155351 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23f19c557d5cf081b79601bdacc08e60c5cd2d36588742f0bb291402543bedc" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.155280 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nrxds" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.167757 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bb0e-account-create-update-qmlsr" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.167818 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bb0e-account-create-update-qmlsr" event={"ID":"cd3a4272-20d4-4e23-99b3-5d8d3a729f16","Type":"ContainerDied","Data":"d53364a7433c7e8c833f78e281b4d2f5146be20f066f081c0fb59bed0a3f69d5"} Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.167901 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d53364a7433c7e8c833f78e281b4d2f5146be20f066f081c0fb59bed0a3f69d5" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.188281 4687 scope.go:117] "RemoveContainer" containerID="b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.195972 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jrhlf"] Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.212057 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-jrhlf"] Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.259522 4687 scope.go:117] "RemoveContainer" containerID="afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf" Dec 03 17:58:16 crc kubenswrapper[4687]: E1203 17:58:16.260133 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf\": container with ID starting with afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf not found: ID does not exist" containerID="afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.260244 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf"} err="failed to get container status \"afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf\": rpc error: code = NotFound desc = could not find container \"afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf\": container with ID starting with afaefb296bbb03cd4220a945b84fc8bc3db0bd1f6b1f2928a67d7ef062e2bebf not found: ID does not exist" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.260331 4687 scope.go:117] "RemoveContainer" containerID="b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806" Dec 03 17:58:16 crc kubenswrapper[4687]: E1203 17:58:16.260642 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806\": container with ID starting with b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806 not found: ID does not exist" containerID="b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.260724 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806"} err="failed to get container status \"b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806\": rpc error: code = NotFound desc = could not find container \"b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806\": container with ID starting with b143986decae3c00af5cc65491390da5e7cd03b25a34539a7fa798510e962806 not found: ID does not exist" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.418820 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.496869 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16caedb-6156-48c7-b43f-b012eb8f49ab-operator-scripts\") pod \"d16caedb-6156-48c7-b43f-b012eb8f49ab\" (UID: \"d16caedb-6156-48c7-b43f-b012eb8f49ab\") " Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.496929 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8t4h\" (UniqueName: \"kubernetes.io/projected/d16caedb-6156-48c7-b43f-b012eb8f49ab-kube-api-access-v8t4h\") pod \"d16caedb-6156-48c7-b43f-b012eb8f49ab\" (UID: \"d16caedb-6156-48c7-b43f-b012eb8f49ab\") " Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.497588 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16caedb-6156-48c7-b43f-b012eb8f49ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d16caedb-6156-48c7-b43f-b012eb8f49ab" (UID: "d16caedb-6156-48c7-b43f-b012eb8f49ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.502948 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16caedb-6156-48c7-b43f-b012eb8f49ab-kube-api-access-v8t4h" (OuterVolumeSpecName: "kube-api-access-v8t4h") pod "d16caedb-6156-48c7-b43f-b012eb8f49ab" (UID: "d16caedb-6156-48c7-b43f-b012eb8f49ab"). InnerVolumeSpecName "kube-api-access-v8t4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.559043 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.599507 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d16caedb-6156-48c7-b43f-b012eb8f49ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.599552 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8t4h\" (UniqueName: \"kubernetes.io/projected/d16caedb-6156-48c7-b43f-b012eb8f49ab-kube-api-access-v8t4h\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.700687 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7j8c\" (UniqueName: \"kubernetes.io/projected/2af7e623-619c-4f5a-87ca-264e29b9043c-kube-api-access-k7j8c\") pod \"2af7e623-619c-4f5a-87ca-264e29b9043c\" (UID: \"2af7e623-619c-4f5a-87ca-264e29b9043c\") " Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.700761 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af7e623-619c-4f5a-87ca-264e29b9043c-operator-scripts\") pod \"2af7e623-619c-4f5a-87ca-264e29b9043c\" (UID: \"2af7e623-619c-4f5a-87ca-264e29b9043c\") " Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.701694 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2af7e623-619c-4f5a-87ca-264e29b9043c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2af7e623-619c-4f5a-87ca-264e29b9043c" (UID: "2af7e623-619c-4f5a-87ca-264e29b9043c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.706560 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af7e623-619c-4f5a-87ca-264e29b9043c-kube-api-access-k7j8c" (OuterVolumeSpecName: "kube-api-access-k7j8c") pod "2af7e623-619c-4f5a-87ca-264e29b9043c" (UID: "2af7e623-619c-4f5a-87ca-264e29b9043c"). InnerVolumeSpecName "kube-api-access-k7j8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.784741 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.792742 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.803653 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl8jz\" (UniqueName: \"kubernetes.io/projected/75727b99-ff6a-438e-9cff-479f1b1331e3-kube-api-access-hl8jz\") pod \"75727b99-ff6a-438e-9cff-479f1b1331e3\" (UID: \"75727b99-ff6a-438e-9cff-479f1b1331e3\") " Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.803730 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349de4c-40de-4644-8653-bdd32bf7f9fa-operator-scripts\") pod \"4349de4c-40de-4644-8653-bdd32bf7f9fa\" (UID: \"4349de4c-40de-4644-8653-bdd32bf7f9fa\") " Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.803779 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxqlv\" (UniqueName: \"kubernetes.io/projected/4349de4c-40de-4644-8653-bdd32bf7f9fa-kube-api-access-wxqlv\") pod \"4349de4c-40de-4644-8653-bdd32bf7f9fa\" (UID: 
\"4349de4c-40de-4644-8653-bdd32bf7f9fa\") " Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.803813 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75727b99-ff6a-438e-9cff-479f1b1331e3-operator-scripts\") pod \"75727b99-ff6a-438e-9cff-479f1b1331e3\" (UID: \"75727b99-ff6a-438e-9cff-479f1b1331e3\") " Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.804094 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7j8c\" (UniqueName: \"kubernetes.io/projected/2af7e623-619c-4f5a-87ca-264e29b9043c-kube-api-access-k7j8c\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.804114 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2af7e623-619c-4f5a-87ca-264e29b9043c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.804657 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75727b99-ff6a-438e-9cff-479f1b1331e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75727b99-ff6a-438e-9cff-479f1b1331e3" (UID: "75727b99-ff6a-438e-9cff-479f1b1331e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.805615 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4349de4c-40de-4644-8653-bdd32bf7f9fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4349de4c-40de-4644-8653-bdd32bf7f9fa" (UID: "4349de4c-40de-4644-8653-bdd32bf7f9fa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.809610 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4349de4c-40de-4644-8653-bdd32bf7f9fa-kube-api-access-wxqlv" (OuterVolumeSpecName: "kube-api-access-wxqlv") pod "4349de4c-40de-4644-8653-bdd32bf7f9fa" (UID: "4349de4c-40de-4644-8653-bdd32bf7f9fa"). InnerVolumeSpecName "kube-api-access-wxqlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.812975 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75727b99-ff6a-438e-9cff-479f1b1331e3-kube-api-access-hl8jz" (OuterVolumeSpecName: "kube-api-access-hl8jz") pod "75727b99-ff6a-438e-9cff-479f1b1331e3" (UID: "75727b99-ff6a-438e-9cff-479f1b1331e3"). InnerVolumeSpecName "kube-api-access-hl8jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.905567 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75727b99-ff6a-438e-9cff-479f1b1331e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.906011 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl8jz\" (UniqueName: \"kubernetes.io/projected/75727b99-ff6a-438e-9cff-479f1b1331e3-kube-api-access-hl8jz\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.906031 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4349de4c-40de-4644-8653-bdd32bf7f9fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:16 crc kubenswrapper[4687]: I1203 17:58:16.906048 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxqlv\" (UniqueName: 
\"kubernetes.io/projected/4349de4c-40de-4644-8653-bdd32bf7f9fa-kube-api-access-wxqlv\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.183246 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wtd6t" Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.188152 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wtd6t" event={"ID":"75727b99-ff6a-438e-9cff-479f1b1331e3","Type":"ContainerDied","Data":"7eb152e7b2fa527cbbb1061ea663ef01aac0791b8c9e111fe1106310c7b15b6b"} Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.188215 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb152e7b2fa527cbbb1061ea663ef01aac0791b8c9e111fe1106310c7b15b6b" Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.190069 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8805-account-create-update-5csps" Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.190061 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8805-account-create-update-5csps" event={"ID":"2af7e623-619c-4f5a-87ca-264e29b9043c","Type":"ContainerDied","Data":"44d13c38ae6f4653cd6d52dbf0127c10d3f3de73328ea1d4a023543e179f4899"} Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.190295 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44d13c38ae6f4653cd6d52dbf0127c10d3f3de73328ea1d4a023543e179f4899" Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.192244 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3b20-account-create-update-qfsqf" event={"ID":"4349de4c-40de-4644-8653-bdd32bf7f9fa","Type":"ContainerDied","Data":"e0b3c9967943f120fb1a5e6941d7d02bbc7dc245e4559428213f9ac9e713fef8"} Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.192293 4687 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="e0b3c9967943f120fb1a5e6941d7d02bbc7dc245e4559428213f9ac9e713fef8" Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.192294 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3b20-account-create-update-qfsqf" Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.212904 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-cd6rr" event={"ID":"d16caedb-6156-48c7-b43f-b012eb8f49ab","Type":"ContainerDied","Data":"25c1871cad958b1bb73fa43ae9a0db8872bc35f9fa64e293216b69e1853feb56"} Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.212960 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c1871cad958b1bb73fa43ae9a0db8872bc35f9fa64e293216b69e1853feb56" Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.212973 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-cd6rr" Dec 03 17:58:17 crc kubenswrapper[4687]: I1203 17:58:17.422782 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" path="/var/lib/kubelet/pods/d1765443-5034-4dc1-a4f6-c23f73da8dc3/volumes" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.087377 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6d87h"] Dec 03 17:58:18 crc kubenswrapper[4687]: E1203 17:58:18.088001 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3a4272-20d4-4e23-99b3-5d8d3a729f16" containerName="mariadb-account-create-update" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088021 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3a4272-20d4-4e23-99b3-5d8d3a729f16" containerName="mariadb-account-create-update" Dec 03 17:58:18 crc kubenswrapper[4687]: E1203 17:58:18.088038 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="355d39b5-af63-4905-909a-5ac6168fd205" containerName="mariadb-database-create" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088047 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="355d39b5-af63-4905-909a-5ac6168fd205" containerName="mariadb-database-create" Dec 03 17:58:18 crc kubenswrapper[4687]: E1203 17:58:18.088059 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16caedb-6156-48c7-b43f-b012eb8f49ab" containerName="mariadb-database-create" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088068 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16caedb-6156-48c7-b43f-b012eb8f49ab" containerName="mariadb-database-create" Dec 03 17:58:18 crc kubenswrapper[4687]: E1203 17:58:18.088106 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" containerName="dnsmasq-dns" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088513 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" containerName="dnsmasq-dns" Dec 03 17:58:18 crc kubenswrapper[4687]: E1203 17:58:18.088537 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" containerName="init" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088546 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" containerName="init" Dec 03 17:58:18 crc kubenswrapper[4687]: E1203 17:58:18.088580 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4349de4c-40de-4644-8653-bdd32bf7f9fa" containerName="mariadb-account-create-update" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088590 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4349de4c-40de-4644-8653-bdd32bf7f9fa" containerName="mariadb-account-create-update" Dec 03 17:58:18 crc kubenswrapper[4687]: E1203 17:58:18.088614 4687 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="2af7e623-619c-4f5a-87ca-264e29b9043c" containerName="mariadb-account-create-update" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088624 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af7e623-619c-4f5a-87ca-264e29b9043c" containerName="mariadb-account-create-update" Dec 03 17:58:18 crc kubenswrapper[4687]: E1203 17:58:18.088641 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75727b99-ff6a-438e-9cff-479f1b1331e3" containerName="mariadb-database-create" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088657 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="75727b99-ff6a-438e-9cff-479f1b1331e3" containerName="mariadb-database-create" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088882 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16caedb-6156-48c7-b43f-b012eb8f49ab" containerName="mariadb-database-create" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088918 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="75727b99-ff6a-438e-9cff-479f1b1331e3" containerName="mariadb-database-create" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088945 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="355d39b5-af63-4905-909a-5ac6168fd205" containerName="mariadb-database-create" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088959 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af7e623-619c-4f5a-87ca-264e29b9043c" containerName="mariadb-account-create-update" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.088985 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1765443-5034-4dc1-a4f6-c23f73da8dc3" containerName="dnsmasq-dns" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.089000 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3a4272-20d4-4e23-99b3-5d8d3a729f16" containerName="mariadb-account-create-update" 
Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.089022 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4349de4c-40de-4644-8653-bdd32bf7f9fa" containerName="mariadb-account-create-update" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.089764 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.092687 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hh25f" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.095035 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.103437 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6d87h"] Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.225804 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-combined-ca-bundle\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.226378 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-config-data\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.226497 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-db-sync-config-data\") pod \"glance-db-sync-6d87h\" (UID: 
\"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.226624 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hcj\" (UniqueName: \"kubernetes.io/projected/810e9e01-af1f-4a88-8858-76fc200db914-kube-api-access-h9hcj\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.329245 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-combined-ca-bundle\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.329324 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-config-data\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.329360 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-db-sync-config-data\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.329414 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9hcj\" (UniqueName: \"kubernetes.io/projected/810e9e01-af1f-4a88-8858-76fc200db914-kube-api-access-h9hcj\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 
03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.335885 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-combined-ca-bundle\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.344838 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-db-sync-config-data\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.347945 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-config-data\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.356334 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9hcj\" (UniqueName: \"kubernetes.io/projected/810e9e01-af1f-4a88-8858-76fc200db914-kube-api-access-h9hcj\") pod \"glance-db-sync-6d87h\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:18 crc kubenswrapper[4687]: I1203 17:58:18.411199 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:19 crc kubenswrapper[4687]: I1203 17:58:19.005384 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6d87h"] Dec 03 17:58:19 crc kubenswrapper[4687]: I1203 17:58:19.233369 4687 generic.go:334] "Generic (PLEG): container finished" podID="9f72b95f-3e3d-49b4-8bca-8d391384a077" containerID="24e9958065ceb80ed68dafb777d6222e01e9c41a1a5786b36cec1b9b4d159792" exitCode=0 Dec 03 17:58:19 crc kubenswrapper[4687]: I1203 17:58:19.233467 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kl6nk" event={"ID":"9f72b95f-3e3d-49b4-8bca-8d391384a077","Type":"ContainerDied","Data":"24e9958065ceb80ed68dafb777d6222e01e9c41a1a5786b36cec1b9b4d159792"} Dec 03 17:58:19 crc kubenswrapper[4687]: I1203 17:58:19.235289 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6d87h" event={"ID":"810e9e01-af1f-4a88-8858-76fc200db914","Type":"ContainerStarted","Data":"f496d40b245d79895e389297fb822262824ce126d2a941b7aaad10c51469e5ec"} Dec 03 17:58:19 crc kubenswrapper[4687]: I1203 17:58:19.995649 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.563910 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.671542 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-ring-data-devices\") pod \"9f72b95f-3e3d-49b4-8bca-8d391384a077\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.671611 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-dispersionconf\") pod \"9f72b95f-3e3d-49b4-8bca-8d391384a077\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.671641 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5qzs\" (UniqueName: \"kubernetes.io/projected/9f72b95f-3e3d-49b4-8bca-8d391384a077-kube-api-access-w5qzs\") pod \"9f72b95f-3e3d-49b4-8bca-8d391384a077\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.671667 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-combined-ca-bundle\") pod \"9f72b95f-3e3d-49b4-8bca-8d391384a077\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.671690 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-scripts\") pod \"9f72b95f-3e3d-49b4-8bca-8d391384a077\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.671725 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"swiftconf\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-swiftconf\") pod \"9f72b95f-3e3d-49b4-8bca-8d391384a077\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.671758 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9f72b95f-3e3d-49b4-8bca-8d391384a077-etc-swift\") pod \"9f72b95f-3e3d-49b4-8bca-8d391384a077\" (UID: \"9f72b95f-3e3d-49b4-8bca-8d391384a077\") " Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.672294 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9f72b95f-3e3d-49b4-8bca-8d391384a077" (UID: "9f72b95f-3e3d-49b4-8bca-8d391384a077"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.672835 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f72b95f-3e3d-49b4-8bca-8d391384a077-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9f72b95f-3e3d-49b4-8bca-8d391384a077" (UID: "9f72b95f-3e3d-49b4-8bca-8d391384a077"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.679479 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f72b95f-3e3d-49b4-8bca-8d391384a077-kube-api-access-w5qzs" (OuterVolumeSpecName: "kube-api-access-w5qzs") pod "9f72b95f-3e3d-49b4-8bca-8d391384a077" (UID: "9f72b95f-3e3d-49b4-8bca-8d391384a077"). InnerVolumeSpecName "kube-api-access-w5qzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.681036 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9f72b95f-3e3d-49b4-8bca-8d391384a077" (UID: "9f72b95f-3e3d-49b4-8bca-8d391384a077"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.695720 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-scripts" (OuterVolumeSpecName: "scripts") pod "9f72b95f-3e3d-49b4-8bca-8d391384a077" (UID: "9f72b95f-3e3d-49b4-8bca-8d391384a077"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.701395 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f72b95f-3e3d-49b4-8bca-8d391384a077" (UID: "9f72b95f-3e3d-49b4-8bca-8d391384a077"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.702332 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9f72b95f-3e3d-49b4-8bca-8d391384a077" (UID: "9f72b95f-3e3d-49b4-8bca-8d391384a077"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.773749 4687 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.773781 4687 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.773792 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5qzs\" (UniqueName: \"kubernetes.io/projected/9f72b95f-3e3d-49b4-8bca-8d391384a077-kube-api-access-w5qzs\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.773802 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.773810 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f72b95f-3e3d-49b4-8bca-8d391384a077-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.773818 4687 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9f72b95f-3e3d-49b4-8bca-8d391384a077-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:20 crc kubenswrapper[4687]: I1203 17:58:20.773826 4687 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9f72b95f-3e3d-49b4-8bca-8d391384a077-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:21 crc kubenswrapper[4687]: I1203 17:58:21.254426 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kl6nk" event={"ID":"9f72b95f-3e3d-49b4-8bca-8d391384a077","Type":"ContainerDied","Data":"ac3eee83c967b07692aae124e8c4dbbc95ee8564abb8ac99d5c5c8fe584fecfb"} Dec 03 17:58:21 crc kubenswrapper[4687]: I1203 17:58:21.254783 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac3eee83c967b07692aae124e8c4dbbc95ee8564abb8ac99d5c5c8fe584fecfb" Dec 03 17:58:21 crc kubenswrapper[4687]: I1203 17:58:21.254500 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kl6nk" Dec 03 17:58:21 crc kubenswrapper[4687]: I1203 17:58:21.996022 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:22 crc kubenswrapper[4687]: I1203 17:58:22.004255 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ab57f25f-0766-479b-ba47-e0b90c955b0d-etc-swift\") pod \"swift-storage-0\" (UID: \"ab57f25f-0766-479b-ba47-e0b90c955b0d\") " pod="openstack/swift-storage-0" Dec 03 17:58:22 crc kubenswrapper[4687]: I1203 17:58:22.249435 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 03 17:58:22 crc kubenswrapper[4687]: I1203 17:58:22.789818 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 03 17:58:22 crc kubenswrapper[4687]: W1203 17:58:22.794748 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab57f25f_0766_479b_ba47_e0b90c955b0d.slice/crio-0ac082b30596aaea5bd28c181975802aed919ef1d296d025537ce1103fa87361 WatchSource:0}: Error finding container 0ac082b30596aaea5bd28c181975802aed919ef1d296d025537ce1103fa87361: Status 404 returned error can't find the container with id 0ac082b30596aaea5bd28c181975802aed919ef1d296d025537ce1103fa87361 Dec 03 17:58:23 crc kubenswrapper[4687]: I1203 17:58:23.273770 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"0ac082b30596aaea5bd28c181975802aed919ef1d296d025537ce1103fa87361"} Dec 03 17:58:28 crc kubenswrapper[4687]: I1203 17:58:28.386318 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2lczs" podUID="3037eba1-1fab-4d56-a3f0-1cecb58b3f7a" containerName="ovn-controller" probeResult="failure" output=< Dec 03 17:58:28 crc kubenswrapper[4687]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 03 17:58:28 crc kubenswrapper[4687]: > Dec 03 17:58:31 crc kubenswrapper[4687]: I1203 17:58:31.350499 4687 generic.go:334] "Generic (PLEG): container finished" podID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" containerID="89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16" exitCode=0 Dec 03 17:58:31 crc kubenswrapper[4687]: I1203 17:58:31.350585 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a","Type":"ContainerDied","Data":"89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16"} Dec 03 17:58:32 crc kubenswrapper[4687]: I1203 17:58:32.363985 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a","Type":"ContainerStarted","Data":"eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3"} Dec 03 17:58:32 crc kubenswrapper[4687]: I1203 17:58:32.364314 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 03 17:58:32 crc kubenswrapper[4687]: I1203 17:58:32.368408 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6d87h" event={"ID":"810e9e01-af1f-4a88-8858-76fc200db914","Type":"ContainerStarted","Data":"13106156a8a727d0e7eb3d857d8c0a5162f994b405c18ba1a79eb75addab621c"} Dec 03 17:58:32 crc kubenswrapper[4687]: I1203 17:58:32.370905 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"9bdd694d511edcd5022f3f8e8f9f259ecaf6b741f4e393d5f775b0b706cdb277"} Dec 03 17:58:32 crc kubenswrapper[4687]: I1203 17:58:32.370955 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"07f155876cbe7d8c58a953a7baf4562da7943354f50164842b53db436ccb8f1d"} Dec 03 17:58:32 crc kubenswrapper[4687]: I1203 17:58:32.370971 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"c91d7955f325c351b13ab942e9f9248698b7494f8e3e71e65ceb3cfe7790d127"} Dec 03 17:58:32 crc kubenswrapper[4687]: I1203 17:58:32.370984 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"4dee48ddee9900d1a51dcbd506c3c3bcce061a15c1fbed72f785dc38bf59655d"} Dec 03 17:58:32 crc kubenswrapper[4687]: I1203 17:58:32.416951 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.293333125 podStartE2EDuration="1m5.416931385s" podCreationTimestamp="2025-12-03 17:57:27 +0000 UTC" firstStartedPulling="2025-12-03 17:57:29.749470607 +0000 UTC m=+1082.640166040" lastFinishedPulling="2025-12-03 17:57:56.873068867 +0000 UTC m=+1109.763764300" observedRunningTime="2025-12-03 17:58:32.392530618 +0000 UTC m=+1145.283226071" watchObservedRunningTime="2025-12-03 17:58:32.416931385 +0000 UTC m=+1145.307626818" Dec 03 17:58:32 crc kubenswrapper[4687]: I1203 17:58:32.417867 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6d87h" podStartSLOduration=2.02258545 podStartE2EDuration="14.41784896s" podCreationTimestamp="2025-12-03 17:58:18 +0000 UTC" firstStartedPulling="2025-12-03 17:58:19.014330829 +0000 UTC m=+1131.905026262" lastFinishedPulling="2025-12-03 17:58:31.409594339 +0000 UTC m=+1144.300289772" observedRunningTime="2025-12-03 17:58:32.411692873 +0000 UTC m=+1145.302388316" watchObservedRunningTime="2025-12-03 17:58:32.41784896 +0000 UTC m=+1145.308544393" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.390574 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-2lczs" podUID="3037eba1-1fab-4d56-a3f0-1cecb58b3f7a" containerName="ovn-controller" probeResult="failure" output=< Dec 03 17:58:33 crc kubenswrapper[4687]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 03 17:58:33 crc kubenswrapper[4687]: > Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.404863 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.432528 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-gtnmq" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.648219 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-2lczs-config-7gr4k"] Dec 03 17:58:33 crc kubenswrapper[4687]: E1203 17:58:33.648883 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f72b95f-3e3d-49b4-8bca-8d391384a077" containerName="swift-ring-rebalance" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.648896 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f72b95f-3e3d-49b4-8bca-8d391384a077" containerName="swift-ring-rebalance" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.649062 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f72b95f-3e3d-49b4-8bca-8d391384a077" containerName="swift-ring-rebalance" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.649636 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.655995 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.657494 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lczs-config-7gr4k"] Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.807997 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.808097 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-log-ovn\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.808145 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-scripts\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.808246 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5cmt\" (UniqueName: \"kubernetes.io/projected/175785be-633c-439b-80e7-ac9d06d9b839-kube-api-access-f5cmt\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: 
\"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.808290 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run-ovn\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.808363 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-additional-scripts\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.909473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.909539 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-log-ovn\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.909565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-scripts\") pod \"ovn-controller-2lczs-config-7gr4k\" 
(UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.909593 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5cmt\" (UniqueName: \"kubernetes.io/projected/175785be-633c-439b-80e7-ac9d06d9b839-kube-api-access-f5cmt\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.909609 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run-ovn\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.909634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-additional-scripts\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.910357 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run-ovn\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.910361 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-additional-scripts\") pod \"ovn-controller-2lczs-config-7gr4k\" 
(UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.910358 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.910358 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-log-ovn\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.912592 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-scripts\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.933353 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5cmt\" (UniqueName: \"kubernetes.io/projected/175785be-633c-439b-80e7-ac9d06d9b839-kube-api-access-f5cmt\") pod \"ovn-controller-2lczs-config-7gr4k\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:33 crc kubenswrapper[4687]: I1203 17:58:33.968750 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:34 crc kubenswrapper[4687]: I1203 17:58:34.392490 4687 generic.go:334] "Generic (PLEG): container finished" podID="63e536c1-72f7-438c-b34c-b8750dd1796b" containerID="51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d" exitCode=0 Dec 03 17:58:34 crc kubenswrapper[4687]: I1203 17:58:34.392580 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63e536c1-72f7-438c-b34c-b8750dd1796b","Type":"ContainerDied","Data":"51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d"} Dec 03 17:58:34 crc kubenswrapper[4687]: I1203 17:58:34.631367 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lczs-config-7gr4k"] Dec 03 17:58:34 crc kubenswrapper[4687]: W1203 17:58:34.740387 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod175785be_633c_439b_80e7_ac9d06d9b839.slice/crio-c3ff29bb9fc8b2ccfe256ade02eec539dd6b344bcb5e403d8ea851434e9c6edf WatchSource:0}: Error finding container c3ff29bb9fc8b2ccfe256ade02eec539dd6b344bcb5e403d8ea851434e9c6edf: Status 404 returned error can't find the container with id c3ff29bb9fc8b2ccfe256ade02eec539dd6b344bcb5e403d8ea851434e9c6edf Dec 03 17:58:35 crc kubenswrapper[4687]: I1203 17:58:35.416758 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"f5f2ae2397a59f3d5e11ab78b272380f591c8c7aca705d0c238104430cf25b8f"} Dec 03 17:58:35 crc kubenswrapper[4687]: I1203 17:58:35.417032 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"bf61e2fbf6ccbb613184d3539f73d79e085f7c8291bdd21abcd3829457837fa7"} Dec 03 17:58:35 crc kubenswrapper[4687]: I1203 
17:58:35.417044 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"7e2dea329a0f424ecc538c7201d64606022d60754cdc01f208e8f13c0aa44503"} Dec 03 17:58:35 crc kubenswrapper[4687]: I1203 17:58:35.417054 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63e536c1-72f7-438c-b34c-b8750dd1796b","Type":"ContainerStarted","Data":"8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007"} Dec 03 17:58:35 crc kubenswrapper[4687]: I1203 17:58:35.417070 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lczs-config-7gr4k" event={"ID":"175785be-633c-439b-80e7-ac9d06d9b839","Type":"ContainerStarted","Data":"bde4df2cb5523fa2ccdbeb36fb7889e5cc3ec53ddbd8aa5cafb52fd5e9c002e8"} Dec 03 17:58:35 crc kubenswrapper[4687]: I1203 17:58:35.417081 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lczs-config-7gr4k" event={"ID":"175785be-633c-439b-80e7-ac9d06d9b839","Type":"ContainerStarted","Data":"c3ff29bb9fc8b2ccfe256ade02eec539dd6b344bcb5e403d8ea851434e9c6edf"} Dec 03 17:58:35 crc kubenswrapper[4687]: I1203 17:58:35.417423 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:58:35 crc kubenswrapper[4687]: I1203 17:58:35.434327 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-2lczs-config-7gr4k" podStartSLOduration=2.434309366 podStartE2EDuration="2.434309366s" podCreationTimestamp="2025-12-03 17:58:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:35.431258644 +0000 UTC m=+1148.321954077" watchObservedRunningTime="2025-12-03 17:58:35.434309366 +0000 UTC m=+1148.325004799" Dec 03 17:58:35 crc kubenswrapper[4687]: I1203 
17:58:35.454246 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371969.400549 podStartE2EDuration="1m7.454226133s" podCreationTimestamp="2025-12-03 17:57:28 +0000 UTC" firstStartedPulling="2025-12-03 17:57:30.041811011 +0000 UTC m=+1082.932506444" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:35.453533664 +0000 UTC m=+1148.344229107" watchObservedRunningTime="2025-12-03 17:58:35.454226133 +0000 UTC m=+1148.344921566" Dec 03 17:58:36 crc kubenswrapper[4687]: I1203 17:58:36.440800 4687 generic.go:334] "Generic (PLEG): container finished" podID="175785be-633c-439b-80e7-ac9d06d9b839" containerID="bde4df2cb5523fa2ccdbeb36fb7889e5cc3ec53ddbd8aa5cafb52fd5e9c002e8" exitCode=0 Dec 03 17:58:36 crc kubenswrapper[4687]: I1203 17:58:36.441260 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lczs-config-7gr4k" event={"ID":"175785be-633c-439b-80e7-ac9d06d9b839","Type":"ContainerDied","Data":"bde4df2cb5523fa2ccdbeb36fb7889e5cc3ec53ddbd8aa5cafb52fd5e9c002e8"} Dec 03 17:58:36 crc kubenswrapper[4687]: I1203 17:58:36.452552 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"0e29d03af5f9692a4b3a2164559e2302d49fcb3c632b4cd5a9ab2cd3c974fc8b"} Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.465551 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"21c4259b0a33e036b844f9b598fc463e7e9d65b452af1fef675a796c37bb1342"} Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.466090 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"d9206c944bb297983fb632d1ab121358a89b8d51a4a655dfe1ef2b9c4f1ebfc2"} Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.466115 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"c554d2fde8b5189a5a3338fa28222f6b444ed53e60dd6741c2faceaa8132ec2f"} Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.466146 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"ddac570004d644be255e9d426067dc5e0402488a8630b34399b2ba877f7f69de"} Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.841043 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.878390 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run-ovn\") pod \"175785be-633c-439b-80e7-ac9d06d9b839\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.878498 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "175785be-633c-439b-80e7-ac9d06d9b839" (UID: "175785be-633c-439b-80e7-ac9d06d9b839"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.878735 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-log-ovn\") pod \"175785be-633c-439b-80e7-ac9d06d9b839\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.878831 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-scripts\") pod \"175785be-633c-439b-80e7-ac9d06d9b839\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.878843 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "175785be-633c-439b-80e7-ac9d06d9b839" (UID: "175785be-633c-439b-80e7-ac9d06d9b839"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.878903 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5cmt\" (UniqueName: \"kubernetes.io/projected/175785be-633c-439b-80e7-ac9d06d9b839-kube-api-access-f5cmt\") pod \"175785be-633c-439b-80e7-ac9d06d9b839\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.879025 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run\") pod \"175785be-633c-439b-80e7-ac9d06d9b839\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.879114 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-additional-scripts\") pod \"175785be-633c-439b-80e7-ac9d06d9b839\" (UID: \"175785be-633c-439b-80e7-ac9d06d9b839\") " Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.879619 4687 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.879640 4687 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.879855 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-scripts" (OuterVolumeSpecName: "scripts") pod "175785be-633c-439b-80e7-ac9d06d9b839" (UID: "175785be-633c-439b-80e7-ac9d06d9b839"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.879992 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "175785be-633c-439b-80e7-ac9d06d9b839" (UID: "175785be-633c-439b-80e7-ac9d06d9b839"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.880104 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run" (OuterVolumeSpecName: "var-run") pod "175785be-633c-439b-80e7-ac9d06d9b839" (UID: "175785be-633c-439b-80e7-ac9d06d9b839"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.922403 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175785be-633c-439b-80e7-ac9d06d9b839-kube-api-access-f5cmt" (OuterVolumeSpecName: "kube-api-access-f5cmt") pod "175785be-633c-439b-80e7-ac9d06d9b839" (UID: "175785be-633c-439b-80e7-ac9d06d9b839"). InnerVolumeSpecName "kube-api-access-f5cmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.980796 4687 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/175785be-633c-439b-80e7-ac9d06d9b839-var-run\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.980832 4687 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.980843 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/175785be-633c-439b-80e7-ac9d06d9b839-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:37 crc kubenswrapper[4687]: I1203 17:58:37.980854 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5cmt\" (UniqueName: \"kubernetes.io/projected/175785be-633c-439b-80e7-ac9d06d9b839-kube-api-access-f5cmt\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.398610 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-2lczs" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.486868 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"488831c72ccde512a2a34eb2e3d99a2969ba4e4f9b8877fe60f6b25f39031406"} Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.487201 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"52f75310dfef8d47d0bb8614c538e54c6646454ef429b689af96538e899b21c5"} Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.487216 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ab57f25f-0766-479b-ba47-e0b90c955b0d","Type":"ContainerStarted","Data":"5db7864626d0ab452d66e55b5ecda66e4eea051539cbf70797f23e8856d6b5e3"} Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.490640 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lczs-config-7gr4k" event={"ID":"175785be-633c-439b-80e7-ac9d06d9b839","Type":"ContainerDied","Data":"c3ff29bb9fc8b2ccfe256ade02eec539dd6b344bcb5e403d8ea851434e9c6edf"} Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.490671 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ff29bb9fc8b2ccfe256ade02eec539dd6b344bcb5e403d8ea851434e9c6edf" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.490676 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lczs-config-7gr4k" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.534563 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.923304247 podStartE2EDuration="34.534514399s" podCreationTimestamp="2025-12-03 17:58:04 +0000 UTC" firstStartedPulling="2025-12-03 17:58:22.797542464 +0000 UTC m=+1135.688237897" lastFinishedPulling="2025-12-03 17:58:36.408752616 +0000 UTC m=+1149.299448049" observedRunningTime="2025-12-03 17:58:38.528185309 +0000 UTC m=+1151.418880752" watchObservedRunningTime="2025-12-03 17:58:38.534514399 +0000 UTC m=+1151.425209852" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.556209 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2lczs-config-7gr4k"] Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.563583 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2lczs-config-7gr4k"] Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.662625 4687 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ovn-controller-2lczs-config-rr7w4"] Dec 03 17:58:38 crc kubenswrapper[4687]: E1203 17:58:38.662995 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175785be-633c-439b-80e7-ac9d06d9b839" containerName="ovn-config" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.663015 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="175785be-633c-439b-80e7-ac9d06d9b839" containerName="ovn-config" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.663222 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="175785be-633c-439b-80e7-ac9d06d9b839" containerName="ovn-config" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.663849 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.666250 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.675038 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lczs-config-rr7w4"] Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.691100 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-log-ovn\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.691199 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8j6\" (UniqueName: \"kubernetes.io/projected/693f6e76-790f-43cc-83f1-5c6661b94f68-kube-api-access-7j8j6\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " 
pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.691237 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.691293 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run-ovn\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.691313 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-scripts\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.691357 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-additional-scripts\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.792706 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-additional-scripts\") pod 
\"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.792772 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-log-ovn\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.792827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j8j6\" (UniqueName: \"kubernetes.io/projected/693f6e76-790f-43cc-83f1-5c6661b94f68-kube-api-access-7j8j6\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.792861 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.792963 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run-ovn\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.792986 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-scripts\") pod 
\"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.793279 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run-ovn\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.793298 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-log-ovn\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.793280 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.793460 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-additional-scripts\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.794435 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nbwnb"] Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.795349 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-scripts\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.817310 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.820045 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.820050 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nbwnb"] Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.834617 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j8j6\" (UniqueName: \"kubernetes.io/projected/693f6e76-790f-43cc-83f1-5c6661b94f68-kube-api-access-7j8j6\") pod \"ovn-controller-2lczs-config-rr7w4\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.986621 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.996009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-svc\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.996074 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.996213 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.996324 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.996445 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-config\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" 
(UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:38 crc kubenswrapper[4687]: I1203 17:58:38.996527 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvvdv\" (UniqueName: \"kubernetes.io/projected/fc2b96d4-525a-4b95-b533-943a10f84e27-kube-api-access-cvvdv\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.099078 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvvdv\" (UniqueName: \"kubernetes.io/projected/fc2b96d4-525a-4b95-b533-943a10f84e27-kube-api-access-cvvdv\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.099553 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-svc\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.099586 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.099603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: 
\"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.099622 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.099650 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-config\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.100471 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-config\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.100954 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-svc\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.101857 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc 
kubenswrapper[4687]: I1203 17:58:39.102473 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.102681 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.120108 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvvdv\" (UniqueName: \"kubernetes.io/projected/fc2b96d4-525a-4b95-b533-943a10f84e27-kube-api-access-cvvdv\") pod \"dnsmasq-dns-764c5664d7-nbwnb\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.180017 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.422351 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="175785be-633c-439b-80e7-ac9d06d9b839" path="/var/lib/kubelet/pods/175785be-633c-439b-80e7-ac9d06d9b839/volumes" Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.447097 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-2lczs-config-rr7w4"] Dec 03 17:58:39 crc kubenswrapper[4687]: W1203 17:58:39.456032 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod693f6e76_790f_43cc_83f1_5c6661b94f68.slice/crio-a394746d5c8ffe713c809e520fabf45d01bdfcf2d987a78b8d0e2f49d3861fba WatchSource:0}: Error finding container a394746d5c8ffe713c809e520fabf45d01bdfcf2d987a78b8d0e2f49d3861fba: Status 404 returned error can't find the container with id a394746d5c8ffe713c809e520fabf45d01bdfcf2d987a78b8d0e2f49d3861fba Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.500181 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lczs-config-rr7w4" event={"ID":"693f6e76-790f-43cc-83f1-5c6661b94f68","Type":"ContainerStarted","Data":"a394746d5c8ffe713c809e520fabf45d01bdfcf2d987a78b8d0e2f49d3861fba"} Dec 03 17:58:39 crc kubenswrapper[4687]: I1203 17:58:39.624950 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nbwnb"] Dec 03 17:58:39 crc kubenswrapper[4687]: W1203 17:58:39.627472 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc2b96d4_525a_4b95_b533_943a10f84e27.slice/crio-73a2f353426f3506f2f1f5d8642402ffee5dc034a7219ae4f726439ad2aedfe7 WatchSource:0}: Error finding container 73a2f353426f3506f2f1f5d8642402ffee5dc034a7219ae4f726439ad2aedfe7: Status 404 returned error can't find the container with id 
73a2f353426f3506f2f1f5d8642402ffee5dc034a7219ae4f726439ad2aedfe7 Dec 03 17:58:40 crc kubenswrapper[4687]: I1203 17:58:40.510910 4687 generic.go:334] "Generic (PLEG): container finished" podID="810e9e01-af1f-4a88-8858-76fc200db914" containerID="13106156a8a727d0e7eb3d857d8c0a5162f994b405c18ba1a79eb75addab621c" exitCode=0 Dec 03 17:58:40 crc kubenswrapper[4687]: I1203 17:58:40.511003 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6d87h" event={"ID":"810e9e01-af1f-4a88-8858-76fc200db914","Type":"ContainerDied","Data":"13106156a8a727d0e7eb3d857d8c0a5162f994b405c18ba1a79eb75addab621c"} Dec 03 17:58:40 crc kubenswrapper[4687]: I1203 17:58:40.515570 4687 generic.go:334] "Generic (PLEG): container finished" podID="693f6e76-790f-43cc-83f1-5c6661b94f68" containerID="2500bb0115d90671aac1ea144a4b3a848d70e4aa19b5292498a410bfdb36ae26" exitCode=0 Dec 03 17:58:40 crc kubenswrapper[4687]: I1203 17:58:40.515688 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lczs-config-rr7w4" event={"ID":"693f6e76-790f-43cc-83f1-5c6661b94f68","Type":"ContainerDied","Data":"2500bb0115d90671aac1ea144a4b3a848d70e4aa19b5292498a410bfdb36ae26"} Dec 03 17:58:40 crc kubenswrapper[4687]: I1203 17:58:40.517710 4687 generic.go:334] "Generic (PLEG): container finished" podID="fc2b96d4-525a-4b95-b533-943a10f84e27" containerID="55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5" exitCode=0 Dec 03 17:58:40 crc kubenswrapper[4687]: I1203 17:58:40.517755 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" event={"ID":"fc2b96d4-525a-4b95-b533-943a10f84e27","Type":"ContainerDied","Data":"55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5"} Dec 03 17:58:40 crc kubenswrapper[4687]: I1203 17:58:40.517783 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" 
event={"ID":"fc2b96d4-525a-4b95-b533-943a10f84e27","Type":"ContainerStarted","Data":"73a2f353426f3506f2f1f5d8642402ffee5dc034a7219ae4f726439ad2aedfe7"} Dec 03 17:58:41 crc kubenswrapper[4687]: I1203 17:58:41.531291 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" event={"ID":"fc2b96d4-525a-4b95-b533-943a10f84e27","Type":"ContainerStarted","Data":"87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229"} Dec 03 17:58:41 crc kubenswrapper[4687]: I1203 17:58:41.533528 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:41 crc kubenswrapper[4687]: I1203 17:58:41.572613 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" podStartSLOduration=3.572589029 podStartE2EDuration="3.572589029s" podCreationTimestamp="2025-12-03 17:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:58:41.566724991 +0000 UTC m=+1154.457420444" watchObservedRunningTime="2025-12-03 17:58:41.572589029 +0000 UTC m=+1154.463284462" Dec 03 17:58:41 crc kubenswrapper[4687]: I1203 17:58:41.958535 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.041047 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.155883 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-config-data\") pod \"810e9e01-af1f-4a88-8858-76fc200db914\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.155951 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-additional-scripts\") pod \"693f6e76-790f-43cc-83f1-5c6661b94f68\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.155989 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-log-ovn\") pod \"693f6e76-790f-43cc-83f1-5c6661b94f68\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.156033 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-combined-ca-bundle\") pod \"810e9e01-af1f-4a88-8858-76fc200db914\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.156094 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-db-sync-config-data\") pod \"810e9e01-af1f-4a88-8858-76fc200db914\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.156161 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7j8j6\" (UniqueName: \"kubernetes.io/projected/693f6e76-790f-43cc-83f1-5c6661b94f68-kube-api-access-7j8j6\") pod \"693f6e76-790f-43cc-83f1-5c6661b94f68\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.156242 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-scripts\") pod \"693f6e76-790f-43cc-83f1-5c6661b94f68\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.156266 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run\") pod \"693f6e76-790f-43cc-83f1-5c6661b94f68\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.156312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9hcj\" (UniqueName: \"kubernetes.io/projected/810e9e01-af1f-4a88-8858-76fc200db914-kube-api-access-h9hcj\") pod \"810e9e01-af1f-4a88-8858-76fc200db914\" (UID: \"810e9e01-af1f-4a88-8858-76fc200db914\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.156368 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run-ovn\") pod \"693f6e76-790f-43cc-83f1-5c6661b94f68\" (UID: \"693f6e76-790f-43cc-83f1-5c6661b94f68\") " Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.156726 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "693f6e76-790f-43cc-83f1-5c6661b94f68" (UID: "693f6e76-790f-43cc-83f1-5c6661b94f68"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.156948 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "693f6e76-790f-43cc-83f1-5c6661b94f68" (UID: "693f6e76-790f-43cc-83f1-5c6661b94f68"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.157022 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "693f6e76-790f-43cc-83f1-5c6661b94f68" (UID: "693f6e76-790f-43cc-83f1-5c6661b94f68"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.157084 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run" (OuterVolumeSpecName: "var-run") pod "693f6e76-790f-43cc-83f1-5c6661b94f68" (UID: "693f6e76-790f-43cc-83f1-5c6661b94f68"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.158354 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-scripts" (OuterVolumeSpecName: "scripts") pod "693f6e76-790f-43cc-83f1-5c6661b94f68" (UID: "693f6e76-790f-43cc-83f1-5c6661b94f68"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.161892 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/693f6e76-790f-43cc-83f1-5c6661b94f68-kube-api-access-7j8j6" (OuterVolumeSpecName: "kube-api-access-7j8j6") pod "693f6e76-790f-43cc-83f1-5c6661b94f68" (UID: "693f6e76-790f-43cc-83f1-5c6661b94f68"). InnerVolumeSpecName "kube-api-access-7j8j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.174883 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "810e9e01-af1f-4a88-8858-76fc200db914" (UID: "810e9e01-af1f-4a88-8858-76fc200db914"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.177170 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810e9e01-af1f-4a88-8858-76fc200db914-kube-api-access-h9hcj" (OuterVolumeSpecName: "kube-api-access-h9hcj") pod "810e9e01-af1f-4a88-8858-76fc200db914" (UID: "810e9e01-af1f-4a88-8858-76fc200db914"). InnerVolumeSpecName "kube-api-access-h9hcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.179221 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "810e9e01-af1f-4a88-8858-76fc200db914" (UID: "810e9e01-af1f-4a88-8858-76fc200db914"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.204978 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-config-data" (OuterVolumeSpecName: "config-data") pod "810e9e01-af1f-4a88-8858-76fc200db914" (UID: "810e9e01-af1f-4a88-8858-76fc200db914"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258310 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j8j6\" (UniqueName: \"kubernetes.io/projected/693f6e76-790f-43cc-83f1-5c6661b94f68-kube-api-access-7j8j6\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258340 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258350 4687 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258358 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9hcj\" (UniqueName: \"kubernetes.io/projected/810e9e01-af1f-4a88-8858-76fc200db914-kube-api-access-h9hcj\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258367 4687 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258376 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258386 4687 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/693f6e76-790f-43cc-83f1-5c6661b94f68-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258394 4687 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/693f6e76-790f-43cc-83f1-5c6661b94f68-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258402 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.258409 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/810e9e01-af1f-4a88-8858-76fc200db914-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.543546 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6d87h" event={"ID":"810e9e01-af1f-4a88-8858-76fc200db914","Type":"ContainerDied","Data":"f496d40b245d79895e389297fb822262824ce126d2a941b7aaad10c51469e5ec"} Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.543599 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f496d40b245d79895e389297fb822262824ce126d2a941b7aaad10c51469e5ec" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.543694 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6d87h" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.557593 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-2lczs-config-rr7w4" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.557683 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-2lczs-config-rr7w4" event={"ID":"693f6e76-790f-43cc-83f1-5c6661b94f68","Type":"ContainerDied","Data":"a394746d5c8ffe713c809e520fabf45d01bdfcf2d987a78b8d0e2f49d3861fba"} Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.557726 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a394746d5c8ffe713c809e520fabf45d01bdfcf2d987a78b8d0e2f49d3861fba" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.922011 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nbwnb"] Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.952469 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-w94zw"] Dec 03 17:58:42 crc kubenswrapper[4687]: E1203 17:58:42.952850 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="693f6e76-790f-43cc-83f1-5c6661b94f68" containerName="ovn-config" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.952878 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="693f6e76-790f-43cc-83f1-5c6661b94f68" containerName="ovn-config" Dec 03 17:58:42 crc kubenswrapper[4687]: E1203 17:58:42.952904 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810e9e01-af1f-4a88-8858-76fc200db914" containerName="glance-db-sync" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.952913 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="810e9e01-af1f-4a88-8858-76fc200db914" containerName="glance-db-sync" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.953106 4687 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="693f6e76-790f-43cc-83f1-5c6661b94f68" containerName="ovn-config" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.953156 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="810e9e01-af1f-4a88-8858-76fc200db914" containerName="glance-db-sync" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.954139 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.974137 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-config\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.974200 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.974228 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.974257 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5bzt\" (UniqueName: 
\"kubernetes.io/projected/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-kube-api-access-q5bzt\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.974272 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.974285 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:42 crc kubenswrapper[4687]: I1203 17:58:42.993723 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-w94zw"] Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.063917 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-2lczs-config-rr7w4"] Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.072447 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-2lczs-config-rr7w4"] Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.079184 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.079243 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5bzt\" (UniqueName: \"kubernetes.io/projected/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-kube-api-access-q5bzt\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.079270 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.079291 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.079383 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-config\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.079429 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.080201 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.080716 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.081434 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.082256 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-config\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.082437 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.098005 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5bzt\" (UniqueName: 
\"kubernetes.io/projected/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-kube-api-access-q5bzt\") pod \"dnsmasq-dns-74f6bcbc87-w94zw\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.281338 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.419847 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="693f6e76-790f-43cc-83f1-5c6661b94f68" path="/var/lib/kubelet/pods/693f6e76-790f-43cc-83f1-5c6661b94f68/volumes" Dec 03 17:58:43 crc kubenswrapper[4687]: I1203 17:58:43.749245 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-w94zw"] Dec 03 17:58:44 crc kubenswrapper[4687]: I1203 17:58:44.577233 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerID="13b2ea95ad88f7f4509ff4daf1b6684e49058fcfb3ed02f032aae4b55fd66120" exitCode=0 Dec 03 17:58:44 crc kubenswrapper[4687]: I1203 17:58:44.577714 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" podUID="fc2b96d4-525a-4b95-b533-943a10f84e27" containerName="dnsmasq-dns" containerID="cri-o://87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229" gracePeriod=10 Dec 03 17:58:44 crc kubenswrapper[4687]: I1203 17:58:44.577403 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" event={"ID":"4e1edd4e-b3e9-40ca-8cb1-86380336a2db","Type":"ContainerDied","Data":"13b2ea95ad88f7f4509ff4daf1b6684e49058fcfb3ed02f032aae4b55fd66120"} Dec 03 17:58:44 crc kubenswrapper[4687]: I1203 17:58:44.578011 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" 
event={"ID":"4e1edd4e-b3e9-40ca-8cb1-86380336a2db","Type":"ContainerStarted","Data":"bf93da3441ac2b7d3014e0db8c8fdb9a2e643787f70cf032bc05fbe1caaa0d34"} Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.037245 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.112519 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-svc\") pod \"fc2b96d4-525a-4b95-b533-943a10f84e27\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.112587 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-swift-storage-0\") pod \"fc2b96d4-525a-4b95-b533-943a10f84e27\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.112626 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-nb\") pod \"fc2b96d4-525a-4b95-b533-943a10f84e27\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.112683 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-sb\") pod \"fc2b96d4-525a-4b95-b533-943a10f84e27\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.112709 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvvdv\" (UniqueName: 
\"kubernetes.io/projected/fc2b96d4-525a-4b95-b533-943a10f84e27-kube-api-access-cvvdv\") pod \"fc2b96d4-525a-4b95-b533-943a10f84e27\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.112752 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-config\") pod \"fc2b96d4-525a-4b95-b533-943a10f84e27\" (UID: \"fc2b96d4-525a-4b95-b533-943a10f84e27\") " Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.121313 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2b96d4-525a-4b95-b533-943a10f84e27-kube-api-access-cvvdv" (OuterVolumeSpecName: "kube-api-access-cvvdv") pod "fc2b96d4-525a-4b95-b533-943a10f84e27" (UID: "fc2b96d4-525a-4b95-b533-943a10f84e27"). InnerVolumeSpecName "kube-api-access-cvvdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.157941 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fc2b96d4-525a-4b95-b533-943a10f84e27" (UID: "fc2b96d4-525a-4b95-b533-943a10f84e27"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.172857 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fc2b96d4-525a-4b95-b533-943a10f84e27" (UID: "fc2b96d4-525a-4b95-b533-943a10f84e27"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.174912 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-config" (OuterVolumeSpecName: "config") pod "fc2b96d4-525a-4b95-b533-943a10f84e27" (UID: "fc2b96d4-525a-4b95-b533-943a10f84e27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.196603 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fc2b96d4-525a-4b95-b533-943a10f84e27" (UID: "fc2b96d4-525a-4b95-b533-943a10f84e27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.198875 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fc2b96d4-525a-4b95-b533-943a10f84e27" (UID: "fc2b96d4-525a-4b95-b533-943a10f84e27"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.214416 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.214466 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.214479 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.214490 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.214499 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvvdv\" (UniqueName: \"kubernetes.io/projected/fc2b96d4-525a-4b95-b533-943a10f84e27-kube-api-access-cvvdv\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.214508 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc2b96d4-525a-4b95-b533-943a10f84e27-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.588409 4687 generic.go:334] "Generic (PLEG): container finished" podID="fc2b96d4-525a-4b95-b533-943a10f84e27" containerID="87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229" exitCode=0 Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.588491 4687 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.588486 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" event={"ID":"fc2b96d4-525a-4b95-b533-943a10f84e27","Type":"ContainerDied","Data":"87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229"} Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.589012 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nbwnb" event={"ID":"fc2b96d4-525a-4b95-b533-943a10f84e27","Type":"ContainerDied","Data":"73a2f353426f3506f2f1f5d8642402ffee5dc034a7219ae4f726439ad2aedfe7"} Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.589042 4687 scope.go:117] "RemoveContainer" containerID="87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.591541 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" event={"ID":"4e1edd4e-b3e9-40ca-8cb1-86380336a2db","Type":"ContainerStarted","Data":"3538985e90b2abd590e86e26cf646b003bd893e2467a8de7ce58eb1abaf1a7fe"} Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.591692 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.616179 4687 scope.go:117] "RemoveContainer" containerID="55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.619330 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" podStartSLOduration=3.619314239 podStartE2EDuration="3.619314239s" podCreationTimestamp="2025-12-03 17:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 17:58:45.612028843 +0000 UTC m=+1158.502724286" watchObservedRunningTime="2025-12-03 17:58:45.619314239 +0000 UTC m=+1158.510009672" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.641462 4687 scope.go:117] "RemoveContainer" containerID="87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229" Dec 03 17:58:45 crc kubenswrapper[4687]: E1203 17:58:45.642191 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229\": container with ID starting with 87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229 not found: ID does not exist" containerID="87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.642233 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229"} err="failed to get container status \"87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229\": rpc error: code = NotFound desc = could not find container \"87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229\": container with ID starting with 87784c1ae55ec29735124948515944442a54096e12f7f440c9ccad8959ca6229 not found: ID does not exist" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.642258 4687 scope.go:117] "RemoveContainer" containerID="55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5" Dec 03 17:58:45 crc kubenswrapper[4687]: E1203 17:58:45.642691 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5\": container with ID starting with 55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5 not found: ID does not exist" 
containerID="55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.642745 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5"} err="failed to get container status \"55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5\": rpc error: code = NotFound desc = could not find container \"55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5\": container with ID starting with 55e73e33a3de3b0d01582ad71b643bbac269c2a9963e8cf017559148091fd6c5 not found: ID does not exist" Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.643861 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nbwnb"] Dec 03 17:58:45 crc kubenswrapper[4687]: I1203 17:58:45.650063 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nbwnb"] Dec 03 17:58:47 crc kubenswrapper[4687]: I1203 17:58:47.423630 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2b96d4-525a-4b95-b533-943a10f84e27" path="/var/lib/kubelet/pods/fc2b96d4-525a-4b95-b533-943a10f84e27/volumes" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.106421 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.442844 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-zsnhj"] Dec 03 17:58:49 crc kubenswrapper[4687]: E1203 17:58:49.443170 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2b96d4-525a-4b95-b533-943a10f84e27" containerName="init" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.443182 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2b96d4-525a-4b95-b533-943a10f84e27" containerName="init" Dec 03 17:58:49 crc kubenswrapper[4687]: E1203 
17:58:49.443204 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2b96d4-525a-4b95-b533-943a10f84e27" containerName="dnsmasq-dns" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.443211 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2b96d4-525a-4b95-b533-943a10f84e27" containerName="dnsmasq-dns" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.443361 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2b96d4-525a-4b95-b533-943a10f84e27" containerName="dnsmasq-dns" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.443841 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.456188 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zsnhj"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.457293 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.552629 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3681-account-create-update-8hzv5"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.553830 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.556586 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.566912 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3681-account-create-update-8hzv5"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.585746 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-operator-scripts\") pod \"barbican-db-create-zsnhj\" (UID: \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\") " pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.586157 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mwnb\" (UniqueName: \"kubernetes.io/projected/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-kube-api-access-5mwnb\") pod \"barbican-db-create-zsnhj\" (UID: \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\") " pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.639821 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2fnzt"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.640957 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.650220 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2fnzt"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.687070 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mwnb\" (UniqueName: \"kubernetes.io/projected/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-kube-api-access-5mwnb\") pod \"barbican-db-create-zsnhj\" (UID: \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\") " pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.687238 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fwg4\" (UniqueName: \"kubernetes.io/projected/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-kube-api-access-6fwg4\") pod \"barbican-3681-account-create-update-8hzv5\" (UID: \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\") " pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.687294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-operator-scripts\") pod \"barbican-3681-account-create-update-8hzv5\" (UID: \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\") " pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.687337 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-operator-scripts\") pod \"barbican-db-create-zsnhj\" (UID: \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\") " pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.688006 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-operator-scripts\") pod \"barbican-db-create-zsnhj\" (UID: \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\") " pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.708010 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mwnb\" (UniqueName: \"kubernetes.io/projected/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-kube-api-access-5mwnb\") pod \"barbican-db-create-zsnhj\" (UID: \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\") " pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.763027 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.763176 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c684-account-create-update-2v9t5"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.764344 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.767830 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.797351 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3667af9-5425-4ca3-b700-48fdc547de52-operator-scripts\") pod \"cinder-db-create-2fnzt\" (UID: \"d3667af9-5425-4ca3-b700-48fdc547de52\") " pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.797422 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtwhp\" (UniqueName: \"kubernetes.io/projected/d3667af9-5425-4ca3-b700-48fdc547de52-kube-api-access-mtwhp\") pod \"cinder-db-create-2fnzt\" (UID: \"d3667af9-5425-4ca3-b700-48fdc547de52\") " pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.797470 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fwg4\" (UniqueName: \"kubernetes.io/projected/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-kube-api-access-6fwg4\") pod \"barbican-3681-account-create-update-8hzv5\" (UID: \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\") " pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.797517 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-operator-scripts\") pod \"barbican-3681-account-create-update-8hzv5\" (UID: \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\") " pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.798478 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-operator-scripts\") pod \"barbican-3681-account-create-update-8hzv5\" (UID: \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\") " pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.803576 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c684-account-create-update-2v9t5"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.821277 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fwg4\" (UniqueName: \"kubernetes.io/projected/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-kube-api-access-6fwg4\") pod \"barbican-3681-account-create-update-8hzv5\" (UID: \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\") " pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.867006 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-g957f"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.868166 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g957f" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.874906 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.892136 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g957f"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.899447 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19b1f86-c351-48d8-b165-177ff9d25d76-operator-scripts\") pod \"cinder-c684-account-create-update-2v9t5\" (UID: \"b19b1f86-c351-48d8-b165-177ff9d25d76\") " pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.899554 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3667af9-5425-4ca3-b700-48fdc547de52-operator-scripts\") pod \"cinder-db-create-2fnzt\" (UID: \"d3667af9-5425-4ca3-b700-48fdc547de52\") " pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.899592 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtwhp\" (UniqueName: \"kubernetes.io/projected/d3667af9-5425-4ca3-b700-48fdc547de52-kube-api-access-mtwhp\") pod \"cinder-db-create-2fnzt\" (UID: \"d3667af9-5425-4ca3-b700-48fdc547de52\") " pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.899664 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7xqd\" (UniqueName: \"kubernetes.io/projected/b19b1f86-c351-48d8-b165-177ff9d25d76-kube-api-access-w7xqd\") pod \"cinder-c684-account-create-update-2v9t5\" (UID: \"b19b1f86-c351-48d8-b165-177ff9d25d76\") " pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.900446 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3667af9-5425-4ca3-b700-48fdc547de52-operator-scripts\") pod \"cinder-db-create-2fnzt\" (UID: \"d3667af9-5425-4ca3-b700-48fdc547de52\") " pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.962439 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-fl77x"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.963712 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.966268 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtwhp\" (UniqueName: \"kubernetes.io/projected/d3667af9-5425-4ca3-b700-48fdc547de52-kube-api-access-mtwhp\") pod \"cinder-db-create-2fnzt\" (UID: \"d3667af9-5425-4ca3-b700-48fdc547de52\") " pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.970725 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ch9hz" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.970973 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.971114 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.971221 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.978755 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fl77x"] Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.989020 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-111f-account-create-update-mqbbx"] Dec 03 17:58:49 
crc kubenswrapper[4687]: I1203 17:58:49.990285 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.992362 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 03 17:58:49 crc kubenswrapper[4687]: I1203 17:58:49.999092 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-111f-account-create-update-mqbbx"] Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.003973 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b39c370-c8bc-4811-a7d3-75e3dd59450c-operator-scripts\") pod \"neutron-db-create-g957f\" (UID: \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\") " pod="openstack/neutron-db-create-g957f" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.004067 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19b1f86-c351-48d8-b165-177ff9d25d76-operator-scripts\") pod \"cinder-c684-account-create-update-2v9t5\" (UID: \"b19b1f86-c351-48d8-b165-177ff9d25d76\") " pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.004218 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cf4w\" (UniqueName: \"kubernetes.io/projected/0b39c370-c8bc-4811-a7d3-75e3dd59450c-kube-api-access-5cf4w\") pod \"neutron-db-create-g957f\" (UID: \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\") " pod="openstack/neutron-db-create-g957f" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.004425 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7xqd\" (UniqueName: 
\"kubernetes.io/projected/b19b1f86-c351-48d8-b165-177ff9d25d76-kube-api-access-w7xqd\") pod \"cinder-c684-account-create-update-2v9t5\" (UID: \"b19b1f86-c351-48d8-b165-177ff9d25d76\") " pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.004783 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19b1f86-c351-48d8-b165-177ff9d25d76-operator-scripts\") pod \"cinder-c684-account-create-update-2v9t5\" (UID: \"b19b1f86-c351-48d8-b165-177ff9d25d76\") " pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.022190 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7xqd\" (UniqueName: \"kubernetes.io/projected/b19b1f86-c351-48d8-b165-177ff9d25d76-kube-api-access-w7xqd\") pod \"cinder-c684-account-create-update-2v9t5\" (UID: \"b19b1f86-c351-48d8-b165-177ff9d25d76\") " pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.090752 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.111579 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d935fb22-7243-4c51-a92c-59e917358f4e-operator-scripts\") pod \"neutron-111f-account-create-update-mqbbx\" (UID: \"d935fb22-7243-4c51-a92c-59e917358f4e\") " pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.111724 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnszd\" (UniqueName: \"kubernetes.io/projected/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-kube-api-access-tnszd\") pod \"keystone-db-sync-fl77x\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.111768 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b39c370-c8bc-4811-a7d3-75e3dd59450c-operator-scripts\") pod \"neutron-db-create-g957f\" (UID: \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\") " pod="openstack/neutron-db-create-g957f" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.111841 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-config-data\") pod \"keystone-db-sync-fl77x\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.111982 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cf4w\" (UniqueName: \"kubernetes.io/projected/0b39c370-c8bc-4811-a7d3-75e3dd59450c-kube-api-access-5cf4w\") pod \"neutron-db-create-g957f\" 
(UID: \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\") " pod="openstack/neutron-db-create-g957f" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.112005 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqvjr\" (UniqueName: \"kubernetes.io/projected/d935fb22-7243-4c51-a92c-59e917358f4e-kube-api-access-qqvjr\") pod \"neutron-111f-account-create-update-mqbbx\" (UID: \"d935fb22-7243-4c51-a92c-59e917358f4e\") " pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.112066 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-combined-ca-bundle\") pod \"keystone-db-sync-fl77x\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.113111 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b39c370-c8bc-4811-a7d3-75e3dd59450c-operator-scripts\") pod \"neutron-db-create-g957f\" (UID: \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\") " pod="openstack/neutron-db-create-g957f" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.133393 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cf4w\" (UniqueName: \"kubernetes.io/projected/0b39c370-c8bc-4811-a7d3-75e3dd59450c-kube-api-access-5cf4w\") pod \"neutron-db-create-g957f\" (UID: \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\") " pod="openstack/neutron-db-create-g957f" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.194250 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-g957f" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.219031 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnszd\" (UniqueName: \"kubernetes.io/projected/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-kube-api-access-tnszd\") pod \"keystone-db-sync-fl77x\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.219133 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-config-data\") pod \"keystone-db-sync-fl77x\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.219208 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqvjr\" (UniqueName: \"kubernetes.io/projected/d935fb22-7243-4c51-a92c-59e917358f4e-kube-api-access-qqvjr\") pod \"neutron-111f-account-create-update-mqbbx\" (UID: \"d935fb22-7243-4c51-a92c-59e917358f4e\") " pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.219242 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-combined-ca-bundle\") pod \"keystone-db-sync-fl77x\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.219314 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d935fb22-7243-4c51-a92c-59e917358f4e-operator-scripts\") pod \"neutron-111f-account-create-update-mqbbx\" (UID: \"d935fb22-7243-4c51-a92c-59e917358f4e\") " 
pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.220077 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d935fb22-7243-4c51-a92c-59e917358f4e-operator-scripts\") pod \"neutron-111f-account-create-update-mqbbx\" (UID: \"d935fb22-7243-4c51-a92c-59e917358f4e\") " pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.232609 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-config-data\") pod \"keystone-db-sync-fl77x\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.239803 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-combined-ca-bundle\") pod \"keystone-db-sync-fl77x\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.256402 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnszd\" (UniqueName: \"kubernetes.io/projected/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-kube-api-access-tnszd\") pod \"keystone-db-sync-fl77x\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.256420 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.257798 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqvjr\" (UniqueName: \"kubernetes.io/projected/d935fb22-7243-4c51-a92c-59e917358f4e-kube-api-access-qqvjr\") pod \"neutron-111f-account-create-update-mqbbx\" (UID: \"d935fb22-7243-4c51-a92c-59e917358f4e\") " pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.373981 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zsnhj"] Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.391464 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fl77x" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.402758 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.558962 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3681-account-create-update-8hzv5"] Dec 03 17:58:50 crc kubenswrapper[4687]: W1203 17:58:50.599231 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa9d192d_f4d8_4b1e_b32e_f4b9de7416e9.slice/crio-20f30455f8c1ddda33e1718ffbf1049d1079781381a261d19616bf4d24ad00f7 WatchSource:0}: Error finding container 20f30455f8c1ddda33e1718ffbf1049d1079781381a261d19616bf4d24ad00f7: Status 404 returned error can't find the container with id 20f30455f8c1ddda33e1718ffbf1049d1079781381a261d19616bf4d24ad00f7 Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.647498 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zsnhj" 
event={"ID":"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9","Type":"ContainerStarted","Data":"7d71fc59a3856212feacda1d0b90e5833834be17cc472d17050ff611f68322c2"} Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.648360 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3681-account-create-update-8hzv5" event={"ID":"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9","Type":"ContainerStarted","Data":"20f30455f8c1ddda33e1718ffbf1049d1079781381a261d19616bf4d24ad00f7"} Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.651498 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c684-account-create-update-2v9t5"] Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.871520 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g957f"] Dec 03 17:58:50 crc kubenswrapper[4687]: W1203 17:58:50.874431 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b39c370_c8bc_4811_a7d3_75e3dd59450c.slice/crio-d4e0ee345246a6e058ddbae7d9e9038aa4c50cbc36b5e9a3fb48b4846caa0bd6 WatchSource:0}: Error finding container d4e0ee345246a6e058ddbae7d9e9038aa4c50cbc36b5e9a3fb48b4846caa0bd6: Status 404 returned error can't find the container with id d4e0ee345246a6e058ddbae7d9e9038aa4c50cbc36b5e9a3fb48b4846caa0bd6 Dec 03 17:58:50 crc kubenswrapper[4687]: E1203 17:58:50.931766 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.130:42758->38.102.83.130:36177: write tcp 38.102.83.130:42758->38.102.83.130:36177: write: broken pipe Dec 03 17:58:50 crc kubenswrapper[4687]: I1203 17:58:50.969503 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2fnzt"] Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.077027 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-111f-account-create-update-mqbbx"] Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 
17:58:51.100293 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fl77x"] Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.658997 4687 generic.go:334] "Generic (PLEG): container finished" podID="0b39c370-c8bc-4811-a7d3-75e3dd59450c" containerID="4a73d9726774ae32f7bfa173c0dd77c716548c81a9707a19dc5f4ba5b0ab16d4" exitCode=0 Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.659042 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g957f" event={"ID":"0b39c370-c8bc-4811-a7d3-75e3dd59450c","Type":"ContainerDied","Data":"4a73d9726774ae32f7bfa173c0dd77c716548c81a9707a19dc5f4ba5b0ab16d4"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.659407 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g957f" event={"ID":"0b39c370-c8bc-4811-a7d3-75e3dd59450c","Type":"ContainerStarted","Data":"d4e0ee345246a6e058ddbae7d9e9038aa4c50cbc36b5e9a3fb48b4846caa0bd6"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.661423 4687 generic.go:334] "Generic (PLEG): container finished" podID="d935fb22-7243-4c51-a92c-59e917358f4e" containerID="9cbca4b6569ad41bd32ceecc7c6d2f12a0864cba2af434b62e8d42386e06c1c1" exitCode=0 Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.661475 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-111f-account-create-update-mqbbx" event={"ID":"d935fb22-7243-4c51-a92c-59e917358f4e","Type":"ContainerDied","Data":"9cbca4b6569ad41bd32ceecc7c6d2f12a0864cba2af434b62e8d42386e06c1c1"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.661491 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-111f-account-create-update-mqbbx" event={"ID":"d935fb22-7243-4c51-a92c-59e917358f4e","Type":"ContainerStarted","Data":"b2e593978190d61f92c604932d25928a50fd618b1acd20f03c5c480e2e681f17"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.662869 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-db-sync-fl77x" event={"ID":"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c","Type":"ContainerStarted","Data":"dfc5695c7d42537c46ce4bd82c45d417a0d4701191ebda67aef66012939bb385"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.664542 4687 generic.go:334] "Generic (PLEG): container finished" podID="1ae1bbd2-1eaf-4869-b833-8ca42a487ba9" containerID="a2e5afcb517975022f4345953179db1af79df4b766a88d8243433f9c08b555b0" exitCode=0 Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.664597 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zsnhj" event={"ID":"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9","Type":"ContainerDied","Data":"a2e5afcb517975022f4345953179db1af79df4b766a88d8243433f9c08b555b0"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.666411 4687 generic.go:334] "Generic (PLEG): container finished" podID="fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9" containerID="95032731d7b609c8404448f438c16eb16e2b95f9287b95df81b3891876a756c3" exitCode=0 Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.666462 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3681-account-create-update-8hzv5" event={"ID":"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9","Type":"ContainerDied","Data":"95032731d7b609c8404448f438c16eb16e2b95f9287b95df81b3891876a756c3"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.668411 4687 generic.go:334] "Generic (PLEG): container finished" podID="b19b1f86-c351-48d8-b165-177ff9d25d76" containerID="1695b7e85825ae74de33fb3ae91d389955636cb5993474e014bb39a4e26608ad" exitCode=0 Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.668460 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c684-account-create-update-2v9t5" event={"ID":"b19b1f86-c351-48d8-b165-177ff9d25d76","Type":"ContainerDied","Data":"1695b7e85825ae74de33fb3ae91d389955636cb5993474e014bb39a4e26608ad"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.668478 4687 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c684-account-create-update-2v9t5" event={"ID":"b19b1f86-c351-48d8-b165-177ff9d25d76","Type":"ContainerStarted","Data":"72e6b07ca2d7dd394361e87f3a086fde0272e0140df3aa7e92dd7d68bd420eee"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.670347 4687 generic.go:334] "Generic (PLEG): container finished" podID="d3667af9-5425-4ca3-b700-48fdc547de52" containerID="d32935b35304743f9d2b35aade9836515985de1aee239a4e4ee64a0268f30f41" exitCode=0 Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.670376 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2fnzt" event={"ID":"d3667af9-5425-4ca3-b700-48fdc547de52","Type":"ContainerDied","Data":"d32935b35304743f9d2b35aade9836515985de1aee239a4e4ee64a0268f30f41"} Dec 03 17:58:51 crc kubenswrapper[4687]: I1203 17:58:51.670395 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2fnzt" event={"ID":"d3667af9-5425-4ca3-b700-48fdc547de52","Type":"ContainerStarted","Data":"6f7fc003ce1791984e3a1581e7a093fe4f8ce66566a378402c070bf74484414e"} Dec 03 17:58:52 crc kubenswrapper[4687]: I1203 17:58:52.987916 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.079302 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqvjr\" (UniqueName: \"kubernetes.io/projected/d935fb22-7243-4c51-a92c-59e917358f4e-kube-api-access-qqvjr\") pod \"d935fb22-7243-4c51-a92c-59e917358f4e\" (UID: \"d935fb22-7243-4c51-a92c-59e917358f4e\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.079512 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d935fb22-7243-4c51-a92c-59e917358f4e-operator-scripts\") pod \"d935fb22-7243-4c51-a92c-59e917358f4e\" (UID: \"d935fb22-7243-4c51-a92c-59e917358f4e\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.080556 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d935fb22-7243-4c51-a92c-59e917358f4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d935fb22-7243-4c51-a92c-59e917358f4e" (UID: "d935fb22-7243-4c51-a92c-59e917358f4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.096667 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d935fb22-7243-4c51-a92c-59e917358f4e-kube-api-access-qqvjr" (OuterVolumeSpecName: "kube-api-access-qqvjr") pod "d935fb22-7243-4c51-a92c-59e917358f4e" (UID: "d935fb22-7243-4c51-a92c-59e917358f4e"). InnerVolumeSpecName "kube-api-access-qqvjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.181158 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d935fb22-7243-4c51-a92c-59e917358f4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.181198 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqvjr\" (UniqueName: \"kubernetes.io/projected/d935fb22-7243-4c51-a92c-59e917358f4e-kube-api-access-qqvjr\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.282273 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.351688 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.357691 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t5dt9"] Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.358059 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-t5dt9" podUID="a3522527-5e9c-4148-89ea-890feca4df8b" containerName="dnsmasq-dns" containerID="cri-o://6208415cd4beae3b8ac7537acd5001d823e63987cefe1a508d2696b585e4f205" gracePeriod=10 Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.364002 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.381030 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.403765 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g957f" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.442472 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.485223 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mwnb\" (UniqueName: \"kubernetes.io/projected/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-kube-api-access-5mwnb\") pod \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\" (UID: \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.485261 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-operator-scripts\") pod \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\" (UID: \"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.485312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fwg4\" (UniqueName: \"kubernetes.io/projected/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-kube-api-access-6fwg4\") pod \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\" (UID: \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.485362 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cf4w\" (UniqueName: \"kubernetes.io/projected/0b39c370-c8bc-4811-a7d3-75e3dd59450c-kube-api-access-5cf4w\") pod \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\" (UID: \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.485402 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7xqd\" (UniqueName: \"kubernetes.io/projected/b19b1f86-c351-48d8-b165-177ff9d25d76-kube-api-access-w7xqd\") pod \"b19b1f86-c351-48d8-b165-177ff9d25d76\" (UID: \"b19b1f86-c351-48d8-b165-177ff9d25d76\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.485435 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b39c370-c8bc-4811-a7d3-75e3dd59450c-operator-scripts\") pod \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\" (UID: \"0b39c370-c8bc-4811-a7d3-75e3dd59450c\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.485473 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-operator-scripts\") pod \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\" (UID: \"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.485499 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19b1f86-c351-48d8-b165-177ff9d25d76-operator-scripts\") pod \"b19b1f86-c351-48d8-b165-177ff9d25d76\" (UID: \"b19b1f86-c351-48d8-b165-177ff9d25d76\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.485900 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b39c370-c8bc-4811-a7d3-75e3dd59450c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b39c370-c8bc-4811-a7d3-75e3dd59450c" (UID: "0b39c370-c8bc-4811-a7d3-75e3dd59450c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.486145 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9" (UID: "fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.486228 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b19b1f86-c351-48d8-b165-177ff9d25d76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b19b1f86-c351-48d8-b165-177ff9d25d76" (UID: "b19b1f86-c351-48d8-b165-177ff9d25d76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.486879 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ae1bbd2-1eaf-4869-b833-8ca42a487ba9" (UID: "1ae1bbd2-1eaf-4869-b833-8ca42a487ba9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.490511 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19b1f86-c351-48d8-b165-177ff9d25d76-kube-api-access-w7xqd" (OuterVolumeSpecName: "kube-api-access-w7xqd") pod "b19b1f86-c351-48d8-b165-177ff9d25d76" (UID: "b19b1f86-c351-48d8-b165-177ff9d25d76"). InnerVolumeSpecName "kube-api-access-w7xqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.491265 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-kube-api-access-5mwnb" (OuterVolumeSpecName: "kube-api-access-5mwnb") pod "1ae1bbd2-1eaf-4869-b833-8ca42a487ba9" (UID: "1ae1bbd2-1eaf-4869-b833-8ca42a487ba9"). InnerVolumeSpecName "kube-api-access-5mwnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.492246 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b39c370-c8bc-4811-a7d3-75e3dd59450c-kube-api-access-5cf4w" (OuterVolumeSpecName: "kube-api-access-5cf4w") pod "0b39c370-c8bc-4811-a7d3-75e3dd59450c" (UID: "0b39c370-c8bc-4811-a7d3-75e3dd59450c"). InnerVolumeSpecName "kube-api-access-5cf4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.492290 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-kube-api-access-6fwg4" (OuterVolumeSpecName: "kube-api-access-6fwg4") pod "fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9" (UID: "fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9"). InnerVolumeSpecName "kube-api-access-6fwg4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.587948 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3667af9-5425-4ca3-b700-48fdc547de52-operator-scripts\") pod \"d3667af9-5425-4ca3-b700-48fdc547de52\" (UID: \"d3667af9-5425-4ca3-b700-48fdc547de52\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.588060 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtwhp\" (UniqueName: \"kubernetes.io/projected/d3667af9-5425-4ca3-b700-48fdc547de52-kube-api-access-mtwhp\") pod \"d3667af9-5425-4ca3-b700-48fdc547de52\" (UID: \"d3667af9-5425-4ca3-b700-48fdc547de52\") " Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.588579 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7xqd\" (UniqueName: \"kubernetes.io/projected/b19b1f86-c351-48d8-b165-177ff9d25d76-kube-api-access-w7xqd\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.588613 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b39c370-c8bc-4811-a7d3-75e3dd59450c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.588623 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.588634 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b19b1f86-c351-48d8-b165-177ff9d25d76-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.588644 4687 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-5mwnb\" (UniqueName: \"kubernetes.io/projected/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-kube-api-access-5mwnb\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.588653 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.588661 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fwg4\" (UniqueName: \"kubernetes.io/projected/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9-kube-api-access-6fwg4\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.588670 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cf4w\" (UniqueName: \"kubernetes.io/projected/0b39c370-c8bc-4811-a7d3-75e3dd59450c-kube-api-access-5cf4w\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.589236 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3667af9-5425-4ca3-b700-48fdc547de52-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3667af9-5425-4ca3-b700-48fdc547de52" (UID: "d3667af9-5425-4ca3-b700-48fdc547de52"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.593648 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3667af9-5425-4ca3-b700-48fdc547de52-kube-api-access-mtwhp" (OuterVolumeSpecName: "kube-api-access-mtwhp") pod "d3667af9-5425-4ca3-b700-48fdc547de52" (UID: "d3667af9-5425-4ca3-b700-48fdc547de52"). InnerVolumeSpecName "kube-api-access-mtwhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.689813 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3667af9-5425-4ca3-b700-48fdc547de52-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.690169 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtwhp\" (UniqueName: \"kubernetes.io/projected/d3667af9-5425-4ca3-b700-48fdc547de52-kube-api-access-mtwhp\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.694427 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3681-account-create-update-8hzv5" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.694438 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3681-account-create-update-8hzv5" event={"ID":"fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9","Type":"ContainerDied","Data":"20f30455f8c1ddda33e1718ffbf1049d1079781381a261d19616bf4d24ad00f7"} Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.694552 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20f30455f8c1ddda33e1718ffbf1049d1079781381a261d19616bf4d24ad00f7" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.695994 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c684-account-create-update-2v9t5" event={"ID":"b19b1f86-c351-48d8-b165-177ff9d25d76","Type":"ContainerDied","Data":"72e6b07ca2d7dd394361e87f3a086fde0272e0140df3aa7e92dd7d68bd420eee"} Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.696027 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72e6b07ca2d7dd394361e87f3a086fde0272e0140df3aa7e92dd7d68bd420eee" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.696082 4687 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/cinder-c684-account-create-update-2v9t5" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.698241 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2fnzt" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.698247 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2fnzt" event={"ID":"d3667af9-5425-4ca3-b700-48fdc547de52","Type":"ContainerDied","Data":"6f7fc003ce1791984e3a1581e7a093fe4f8ce66566a378402c070bf74484414e"} Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.698274 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f7fc003ce1791984e3a1581e7a093fe4f8ce66566a378402c070bf74484414e" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.701531 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g957f" event={"ID":"0b39c370-c8bc-4811-a7d3-75e3dd59450c","Type":"ContainerDied","Data":"d4e0ee345246a6e058ddbae7d9e9038aa4c50cbc36b5e9a3fb48b4846caa0bd6"} Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.701552 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4e0ee345246a6e058ddbae7d9e9038aa4c50cbc36b5e9a3fb48b4846caa0bd6" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.701599 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-g957f" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.707141 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-111f-account-create-update-mqbbx" event={"ID":"d935fb22-7243-4c51-a92c-59e917358f4e","Type":"ContainerDied","Data":"b2e593978190d61f92c604932d25928a50fd618b1acd20f03c5c480e2e681f17"} Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.707241 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2e593978190d61f92c604932d25928a50fd618b1acd20f03c5c480e2e681f17" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.707172 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-111f-account-create-update-mqbbx" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.717149 4687 generic.go:334] "Generic (PLEG): container finished" podID="a3522527-5e9c-4148-89ea-890feca4df8b" containerID="6208415cd4beae3b8ac7537acd5001d823e63987cefe1a508d2696b585e4f205" exitCode=0 Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.717428 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t5dt9" event={"ID":"a3522527-5e9c-4148-89ea-890feca4df8b","Type":"ContainerDied","Data":"6208415cd4beae3b8ac7537acd5001d823e63987cefe1a508d2696b585e4f205"} Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.726656 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zsnhj" Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.726571 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zsnhj" event={"ID":"1ae1bbd2-1eaf-4869-b833-8ca42a487ba9","Type":"ContainerDied","Data":"7d71fc59a3856212feacda1d0b90e5833834be17cc472d17050ff611f68322c2"} Dec 03 17:58:53 crc kubenswrapper[4687]: I1203 17:58:53.728375 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d71fc59a3856212feacda1d0b90e5833834be17cc472d17050ff611f68322c2" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.406303 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.537735 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-sb\") pod \"a3522527-5e9c-4148-89ea-890feca4df8b\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.538108 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5wgs\" (UniqueName: \"kubernetes.io/projected/a3522527-5e9c-4148-89ea-890feca4df8b-kube-api-access-t5wgs\") pod \"a3522527-5e9c-4148-89ea-890feca4df8b\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.538778 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-config\") pod \"a3522527-5e9c-4148-89ea-890feca4df8b\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.538826 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-nb\") pod \"a3522527-5e9c-4148-89ea-890feca4df8b\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.538855 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-dns-svc\") pod \"a3522527-5e9c-4148-89ea-890feca4df8b\" (UID: \"a3522527-5e9c-4148-89ea-890feca4df8b\") " Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.542669 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3522527-5e9c-4148-89ea-890feca4df8b-kube-api-access-t5wgs" (OuterVolumeSpecName: "kube-api-access-t5wgs") pod "a3522527-5e9c-4148-89ea-890feca4df8b" (UID: "a3522527-5e9c-4148-89ea-890feca4df8b"). InnerVolumeSpecName "kube-api-access-t5wgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.581248 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-config" (OuterVolumeSpecName: "config") pod "a3522527-5e9c-4148-89ea-890feca4df8b" (UID: "a3522527-5e9c-4148-89ea-890feca4df8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.590383 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a3522527-5e9c-4148-89ea-890feca4df8b" (UID: "a3522527-5e9c-4148-89ea-890feca4df8b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.592939 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a3522527-5e9c-4148-89ea-890feca4df8b" (UID: "a3522527-5e9c-4148-89ea-890feca4df8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.606168 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a3522527-5e9c-4148-89ea-890feca4df8b" (UID: "a3522527-5e9c-4148-89ea-890feca4df8b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.640972 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.641012 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5wgs\" (UniqueName: \"kubernetes.io/projected/a3522527-5e9c-4148-89ea-890feca4df8b-kube-api-access-t5wgs\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.641034 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.641051 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 
17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.641067 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a3522527-5e9c-4148-89ea-890feca4df8b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.762147 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-t5dt9" event={"ID":"a3522527-5e9c-4148-89ea-890feca4df8b","Type":"ContainerDied","Data":"6e7368619a3c403923acac3cf2fe06eac18a70816cec77e66dd7478b19f4ea1f"} Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.762211 4687 scope.go:117] "RemoveContainer" containerID="6208415cd4beae3b8ac7537acd5001d823e63987cefe1a508d2696b585e4f205" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.762170 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-t5dt9" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.764338 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fl77x" event={"ID":"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c","Type":"ContainerStarted","Data":"106e9bb9cdb9821d16e06ec4496198c7af1cafae68082b7984488bd0dcfa0d9c"} Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.792365 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-fl77x" podStartSLOduration=2.516652214 podStartE2EDuration="7.792345865s" podCreationTimestamp="2025-12-03 17:58:49 +0000 UTC" firstStartedPulling="2025-12-03 17:58:51.124343109 +0000 UTC m=+1164.015038542" lastFinishedPulling="2025-12-03 17:58:56.40003676 +0000 UTC m=+1169.290732193" observedRunningTime="2025-12-03 17:58:56.785519051 +0000 UTC m=+1169.676214494" watchObservedRunningTime="2025-12-03 17:58:56.792345865 +0000 UTC m=+1169.683041308" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.814086 4687 scope.go:117] "RemoveContainer" 
containerID="5c5812d5efeb6ca0a75782a26009d999ca30f00dfaf874da56275e451bf0a944" Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.817585 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t5dt9"] Dec 03 17:58:56 crc kubenswrapper[4687]: I1203 17:58:56.829608 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-t5dt9"] Dec 03 17:58:57 crc kubenswrapper[4687]: I1203 17:58:57.420078 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3522527-5e9c-4148-89ea-890feca4df8b" path="/var/lib/kubelet/pods/a3522527-5e9c-4148-89ea-890feca4df8b/volumes" Dec 03 17:59:00 crc kubenswrapper[4687]: I1203 17:59:00.189035 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-t5dt9" podUID="a3522527-5e9c-4148-89ea-890feca4df8b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Dec 03 17:59:02 crc kubenswrapper[4687]: I1203 17:59:02.819011 4687 generic.go:334] "Generic (PLEG): container finished" podID="d35a8832-c4a9-4d5a-8612-d870bcf6fa4c" containerID="106e9bb9cdb9821d16e06ec4496198c7af1cafae68082b7984488bd0dcfa0d9c" exitCode=0 Dec 03 17:59:02 crc kubenswrapper[4687]: I1203 17:59:02.819147 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fl77x" event={"ID":"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c","Type":"ContainerDied","Data":"106e9bb9cdb9821d16e06ec4496198c7af1cafae68082b7984488bd0dcfa0d9c"} Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.594230 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fl77x" Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.675004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnszd\" (UniqueName: \"kubernetes.io/projected/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-kube-api-access-tnszd\") pod \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.675089 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-combined-ca-bundle\") pod \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.675227 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-config-data\") pod \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\" (UID: \"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c\") " Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.680340 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-kube-api-access-tnszd" (OuterVolumeSpecName: "kube-api-access-tnszd") pod "d35a8832-c4a9-4d5a-8612-d870bcf6fa4c" (UID: "d35a8832-c4a9-4d5a-8612-d870bcf6fa4c"). InnerVolumeSpecName "kube-api-access-tnszd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.700358 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d35a8832-c4a9-4d5a-8612-d870bcf6fa4c" (UID: "d35a8832-c4a9-4d5a-8612-d870bcf6fa4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.717857 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-config-data" (OuterVolumeSpecName: "config-data") pod "d35a8832-c4a9-4d5a-8612-d870bcf6fa4c" (UID: "d35a8832-c4a9-4d5a-8612-d870bcf6fa4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.777273 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnszd\" (UniqueName: \"kubernetes.io/projected/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-kube-api-access-tnszd\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.777310 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.777319 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.840216 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fl77x" event={"ID":"d35a8832-c4a9-4d5a-8612-d870bcf6fa4c","Type":"ContainerDied","Data":"dfc5695c7d42537c46ce4bd82c45d417a0d4701191ebda67aef66012939bb385"} Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.840264 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfc5695c7d42537c46ce4bd82c45d417a0d4701191ebda67aef66012939bb385" Dec 03 17:59:04 crc kubenswrapper[4687]: I1203 17:59:04.840318 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fl77x" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.118849 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sdzg5"] Dec 03 17:59:05 crc kubenswrapper[4687]: E1203 17:59:05.119598 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d35a8832-c4a9-4d5a-8612-d870bcf6fa4c" containerName="keystone-db-sync" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.119621 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35a8832-c4a9-4d5a-8612-d870bcf6fa4c" containerName="keystone-db-sync" Dec 03 17:59:05 crc kubenswrapper[4687]: E1203 17:59:05.119639 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19b1f86-c351-48d8-b165-177ff9d25d76" containerName="mariadb-account-create-update" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.119647 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19b1f86-c351-48d8-b165-177ff9d25d76" containerName="mariadb-account-create-update" Dec 03 17:59:05 crc kubenswrapper[4687]: E1203 17:59:05.119665 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae1bbd2-1eaf-4869-b833-8ca42a487ba9" containerName="mariadb-database-create" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.119672 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae1bbd2-1eaf-4869-b833-8ca42a487ba9" containerName="mariadb-database-create" Dec 03 17:59:05 crc kubenswrapper[4687]: E1203 17:59:05.119688 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3667af9-5425-4ca3-b700-48fdc547de52" containerName="mariadb-database-create" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.119697 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3667af9-5425-4ca3-b700-48fdc547de52" containerName="mariadb-database-create" Dec 03 17:59:05 crc kubenswrapper[4687]: E1203 17:59:05.119722 4687 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9" containerName="mariadb-account-create-update" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.119731 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9" containerName="mariadb-account-create-update" Dec 03 17:59:05 crc kubenswrapper[4687]: E1203 17:59:05.119758 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d935fb22-7243-4c51-a92c-59e917358f4e" containerName="mariadb-account-create-update" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.119766 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d935fb22-7243-4c51-a92c-59e917358f4e" containerName="mariadb-account-create-update" Dec 03 17:59:05 crc kubenswrapper[4687]: E1203 17:59:05.119780 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3522527-5e9c-4148-89ea-890feca4df8b" containerName="init" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.119787 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3522527-5e9c-4148-89ea-890feca4df8b" containerName="init" Dec 03 17:59:05 crc kubenswrapper[4687]: E1203 17:59:05.119797 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3522527-5e9c-4148-89ea-890feca4df8b" containerName="dnsmasq-dns" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.119805 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3522527-5e9c-4148-89ea-890feca4df8b" containerName="dnsmasq-dns" Dec 03 17:59:05 crc kubenswrapper[4687]: E1203 17:59:05.119825 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b39c370-c8bc-4811-a7d3-75e3dd59450c" containerName="mariadb-database-create" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.119833 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b39c370-c8bc-4811-a7d3-75e3dd59450c" containerName="mariadb-database-create" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.120017 4687 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d3667af9-5425-4ca3-b700-48fdc547de52" containerName="mariadb-database-create" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.120033 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b39c370-c8bc-4811-a7d3-75e3dd59450c" containerName="mariadb-database-create" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.120049 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35a8832-c4a9-4d5a-8612-d870bcf6fa4c" containerName="keystone-db-sync" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.120065 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19b1f86-c351-48d8-b165-177ff9d25d76" containerName="mariadb-account-create-update" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.120077 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3522527-5e9c-4148-89ea-890feca4df8b" containerName="dnsmasq-dns" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.120084 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae1bbd2-1eaf-4869-b833-8ca42a487ba9" containerName="mariadb-database-create" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.120096 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9" containerName="mariadb-account-create-update" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.120104 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d935fb22-7243-4c51-a92c-59e917358f4e" containerName="mariadb-account-create-update" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.121091 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.144102 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r572g"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.145063 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.150536 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.151063 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.151371 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.153105 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.158816 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ch9hz" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.164495 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sdzg5"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.183833 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r572g"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.285860 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 
17:59:05.285933 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-credential-keys\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.285956 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-config-data\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.286009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-config\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.286033 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-scripts\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.286051 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-fernet-keys\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.286072 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjlwr\" (UniqueName: \"kubernetes.io/projected/bc580093-1ee1-4b3d-b8c6-700bf15b5330-kube-api-access-zjlwr\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.286097 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6qx\" (UniqueName: \"kubernetes.io/projected/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-kube-api-access-nf6qx\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.286172 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.286196 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.286219 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-combined-ca-bundle\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" 
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.286390 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-svc\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.355372 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b9f7ddcd5-q82d2"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.357344 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.376269 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b9f7ddcd5-q82d2"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.387558 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.387867 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.388460 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-479fj" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.388735 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.389830 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hsnj2"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.391414 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406685 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406723 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406751 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-combined-ca-bundle\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406781 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-svc\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406816 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " 
pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406857 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-credential-keys\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-config-data\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406907 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-config\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-scripts\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406947 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-fernet-keys\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406968 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zjlwr\" (UniqueName: \"kubernetes.io/projected/bc580093-1ee1-4b3d-b8c6-700bf15b5330-kube-api-access-zjlwr\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.406995 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf6qx\" (UniqueName: \"kubernetes.io/projected/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-kube-api-access-nf6qx\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.408097 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.408633 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.424764 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hsnj2"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.424849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-config\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" 
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.425827 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-svc\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.429955 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-config-data\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.435101 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.435650 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-credential-keys\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.436226 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-combined-ca-bundle\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.440321 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-fernet-keys\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.440633 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-scripts\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.440811 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.440850 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4cs5r" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.440982 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.480604 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf6qx\" (UniqueName: \"kubernetes.io/projected/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-kube-api-access-nf6qx\") pod \"keystone-bootstrap-r572g\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.513572 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-scripts\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.513670 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-config\") pod \"neutron-db-sync-hsnj2\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.513696 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac2ac2e6-024d-46dd-80c1-92472cf6e116-logs\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.513760 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-combined-ca-bundle\") pod \"neutron-db-sync-hsnj2\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.513871 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac2ac2e6-024d-46dd-80c1-92472cf6e116-horizon-secret-key\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.514039 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmgw5\" (UniqueName: \"kubernetes.io/projected/80c9985b-e915-4819-8355-af9e8076f50a-kube-api-access-zmgw5\") pod \"neutron-db-sync-hsnj2\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.514082 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-config-data\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.514186 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cktk5\" (UniqueName: \"kubernetes.io/projected/ac2ac2e6-024d-46dd-80c1-92472cf6e116-kube-api-access-cktk5\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.538654 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjlwr\" (UniqueName: \"kubernetes.io/projected/bc580093-1ee1-4b3d-b8c6-700bf15b5330-kube-api-access-zjlwr\") pod \"dnsmasq-dns-847c4cc679-sdzg5\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") " pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.553281 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.555187 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.560554 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.561056 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.616770 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-scripts\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.616813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-config\") pod \"neutron-db-sync-hsnj2\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.616837 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac2ac2e6-024d-46dd-80c1-92472cf6e116-logs\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.616876 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-combined-ca-bundle\") pod \"neutron-db-sync-hsnj2\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.616941 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac2ac2e6-024d-46dd-80c1-92472cf6e116-horizon-secret-key\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.616967 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmgw5\" (UniqueName: \"kubernetes.io/projected/80c9985b-e915-4819-8355-af9e8076f50a-kube-api-access-zmgw5\") pod \"neutron-db-sync-hsnj2\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.616983 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-config-data\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.617016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cktk5\" (UniqueName: \"kubernetes.io/projected/ac2ac2e6-024d-46dd-80c1-92472cf6e116-kube-api-access-cktk5\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.618072 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-scripts\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.622099 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac2ac2e6-024d-46dd-80c1-92472cf6e116-logs\") pod 
\"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.623837 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-config-data\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.630823 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-config\") pod \"neutron-db-sync-hsnj2\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.634652 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac2ac2e6-024d-46dd-80c1-92472cf6e116-horizon-secret-key\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.634853 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-combined-ca-bundle\") pod \"neutron-db-sync-hsnj2\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.667055 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cktk5\" (UniqueName: \"kubernetes.io/projected/ac2ac2e6-024d-46dd-80c1-92472cf6e116-kube-api-access-cktk5\") pod \"horizon-b9f7ddcd5-q82d2\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc 
kubenswrapper[4687]: I1203 17:59:05.667353 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.679870 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmgw5\" (UniqueName: \"kubernetes.io/projected/80c9985b-e915-4819-8355-af9e8076f50a-kube-api-access-zmgw5\") pod \"neutron-db-sync-hsnj2\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.680501 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.719887 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-run-httpd\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.719935 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-config-data\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.719969 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.719998 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2bxm\" 
(UniqueName: \"kubernetes.io/projected/ee43441f-77ef-4fd7-a326-b173070a6060-kube-api-access-n2bxm\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.720023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-scripts\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.720042 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.720094 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-log-httpd\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.738959 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2flgf"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.740176 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2flgf" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.741577 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.744586 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.748665 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.748944 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sktmm" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.752065 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2flgf"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.763656 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.823997 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sdzg5"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.825011 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.825078 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2bxm\" (UniqueName: \"kubernetes.io/projected/ee43441f-77ef-4fd7-a326-b173070a6060-kube-api-access-n2bxm\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.825136 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-scripts\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 
crc kubenswrapper[4687]: I1203 17:59:05.825167 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.825246 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-log-httpd\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.825312 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-run-httpd\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.825343 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-config-data\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.831743 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-log-httpd\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.832482 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-run-httpd\") pod \"ceilometer-0\" (UID: 
\"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.833579 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-config-data\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.848649 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-scripts\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.853534 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.855880 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0" Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.867190 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-schhv"] Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.868268 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.876559 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.877035 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.885948 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qd5t9"
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.906386 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2bxm\" (UniqueName: \"kubernetes.io/projected/ee43441f-77ef-4fd7-a326-b173070a6060-kube-api-access-n2bxm\") pod \"ceilometer-0\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") " pod="openstack/ceilometer-0"
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.927838 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-schhv"]
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.929238 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-db-sync-config-data\") pod \"barbican-db-sync-2flgf\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.929273 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-combined-ca-bundle\") pod \"barbican-db-sync-2flgf\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.929365 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsh9p\" (UniqueName: \"kubernetes.io/projected/f34993b1-3135-46ef-9f85-9ab7525b1682-kube-api-access-nsh9p\") pod \"barbican-db-sync-2flgf\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:05 crc kubenswrapper[4687]: I1203 17:59:05.929509 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.031475 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4srk\" (UniqueName: \"kubernetes.io/projected/67159b4a-2e66-424e-9e93-4863da0f5b56-kube-api-access-q4srk\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.031533 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsh9p\" (UniqueName: \"kubernetes.io/projected/f34993b1-3135-46ef-9f85-9ab7525b1682-kube-api-access-nsh9p\") pod \"barbican-db-sync-2flgf\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.031558 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-config-data\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.031595 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67159b4a-2e66-424e-9e93-4863da0f5b56-etc-machine-id\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.031614 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-db-sync-config-data\") pod \"barbican-db-sync-2flgf\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.031638 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-scripts\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.031656 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-combined-ca-bundle\") pod \"barbican-db-sync-2flgf\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.031672 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-combined-ca-bundle\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.031750 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-db-sync-config-data\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.036193 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-db-sync-config-data\") pod \"barbican-db-sync-2flgf\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.048843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-combined-ca-bundle\") pod \"barbican-db-sync-2flgf\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.067022 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9ddff4dd7-zxsgk"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.068775 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.072284 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsh9p\" (UniqueName: \"kubernetes.io/projected/f34993b1-3135-46ef-9f85-9ab7525b1682-kube-api-access-nsh9p\") pod \"barbican-db-sync-2flgf\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.086236 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-bqs26"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.087524 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.102544 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-m67k4"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.105752 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.110220 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.110394 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-46dxr"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.110513 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.111188 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2flgf"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.123076 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-bqs26"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133030 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-config-data\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133089 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-config-data\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133136 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67159b4a-2e66-424e-9e93-4863da0f5b56-etc-machine-id\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133163 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-scripts\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-combined-ca-bundle\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133222 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/84c51663-b6c7-4d14-9990-105bf776f49c-horizon-secret-key\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133245 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84c51663-b6c7-4d14-9990-105bf776f49c-logs\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133266 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-scripts\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133280 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9hc\" (UniqueName: \"kubernetes.io/projected/84c51663-b6c7-4d14-9990-105bf776f49c-kube-api-access-fw9hc\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133300 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-db-sync-config-data\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.133334 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4srk\" (UniqueName: \"kubernetes.io/projected/67159b4a-2e66-424e-9e93-4863da0f5b56-kube-api-access-q4srk\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.137339 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m67k4"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.137442 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67159b4a-2e66-424e-9e93-4863da0f5b56-etc-machine-id\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.139843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-config-data\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.139883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-scripts\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.143528 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-combined-ca-bundle\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.146004 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-db-sync-config-data\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.153179 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9ddff4dd7-zxsgk"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.160246 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4srk\" (UniqueName: \"kubernetes.io/projected/67159b4a-2e66-424e-9e93-4863da0f5b56-kube-api-access-q4srk\") pod \"cinder-db-sync-schhv\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.182638 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.184161 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.193771 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.198201 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hh25f"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.199393 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.200928 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.201623 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.228255 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.238925 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.238984 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239009 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a59cd24e-e105-48b8-a084-909b0dca97c0-logs\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239093 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-config\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239193 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/84c51663-b6c7-4d14-9990-105bf776f49c-horizon-secret-key\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239250 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84c51663-b6c7-4d14-9990-105bf776f49c-logs\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-config-data\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239317 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-scripts\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239340 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9hc\" (UniqueName: \"kubernetes.io/projected/84c51663-b6c7-4d14-9990-105bf776f49c-kube-api-access-fw9hc\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239362 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239396 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-combined-ca-bundle\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239490 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-647ct\" (UniqueName: \"kubernetes.io/projected/a59cd24e-e105-48b8-a084-909b0dca97c0-kube-api-access-647ct\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239544 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239603 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-config-data\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239655 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-scripts\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.239692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whtjl\" (UniqueName: \"kubernetes.io/projected/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-kube-api-access-whtjl\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.243725 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84c51663-b6c7-4d14-9990-105bf776f49c-logs\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.244533 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-config-data\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.245411 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.248160 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-scripts\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.251969 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/84c51663-b6c7-4d14-9990-105bf776f49c-horizon-secret-key\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.255581 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.261753 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.271864 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.287181 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9hc\" (UniqueName: \"kubernetes.io/projected/84c51663-b6c7-4d14-9990-105bf776f49c-kube-api-access-fw9hc\") pod \"horizon-9ddff4dd7-zxsgk\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " pod="openstack/horizon-9ddff4dd7-zxsgk"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.293449 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-schhv"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341395 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341452 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341656 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341690 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a59cd24e-e105-48b8-a084-909b0dca97c0-logs\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341715 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-logs\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341794 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-config\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341847 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341872 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-config-data\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341898 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341924 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-combined-ca-bundle\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341944 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.341967 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342006 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342027 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342070 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342099 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-647ct\" (UniqueName: \"kubernetes.io/projected/a59cd24e-e105-48b8-a084-909b0dca97c0-kube-api-access-647ct\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342138 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342187 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342257 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342279 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvjpq\" (UniqueName: \"kubernetes.io/projected/caf2fb27-488f-4976-8cf9-082b37eb90d0-kube-api-access-lvjpq\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342306 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-scripts\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342326 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342351 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whtjl\" (UniqueName: \"kubernetes.io/projected/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-kube-api-access-whtjl\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342374 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd9tg\" (UniqueName: \"kubernetes.io/projected/c8c85763-5d16-4d14-9ce2-0aa054e701e4-kube-api-access-wd9tg\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.342393 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.344448 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.344448 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.344790 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-config\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.345063 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a59cd24e-e105-48b8-a084-909b0dca97c0-logs\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.345113 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.346388 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.348908 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-combined-ca-bundle\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.350856 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-scripts\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4"
Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.353325 4687 operation_generator.go:637] "MountVolume.SetUp succeeded
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-config-data\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.370884 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whtjl\" (UniqueName: \"kubernetes.io/projected/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-kube-api-access-whtjl\") pod \"dnsmasq-dns-785d8bcb8c-bqs26\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.379876 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-647ct\" (UniqueName: \"kubernetes.io/projected/a59cd24e-e105-48b8-a084-909b0dca97c0-kube-api-access-647ct\") pod \"placement-db-sync-m67k4\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " pod="openstack/placement-db-sync-m67k4" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.417736 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9ddff4dd7-zxsgk" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.431564 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445073 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445166 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvjpq\" (UniqueName: \"kubernetes.io/projected/caf2fb27-488f-4976-8cf9-082b37eb90d0-kube-api-access-lvjpq\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445186 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445203 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd9tg\" (UniqueName: \"kubernetes.io/projected/c8c85763-5d16-4d14-9ce2-0aa054e701e4-kube-api-access-wd9tg\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " 
pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445235 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445256 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-logs\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445286 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445323 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445349 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445365 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445393 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445409 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445433 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.445449 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.446227 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-logs\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.446469 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.446684 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.446915 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.448364 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.448539 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m67k4" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.448983 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.456200 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.469639 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.473987 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.474194 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.474340 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.474525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-scripts\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.475234 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-config-data\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.476875 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.487775 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvjpq\" (UniqueName: 
\"kubernetes.io/projected/caf2fb27-488f-4976-8cf9-082b37eb90d0-kube-api-access-lvjpq\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.487928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.491445 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.492673 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd9tg\" (UniqueName: \"kubernetes.io/projected/c8c85763-5d16-4d14-9ce2-0aa054e701e4-kube-api-access-wd9tg\") pod \"glance-default-external-api-0\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.554571 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.570460 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b9f7ddcd5-q82d2"] Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.603328 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.817675 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hsnj2"] Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.834439 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sdzg5"] Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.906728 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" event={"ID":"bc580093-1ee1-4b3d-b8c6-700bf15b5330","Type":"ContainerStarted","Data":"fe828dff755e158490a51046a4f47e4deffe775e9d345824765b0d1decc01e64"} Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.909783 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hsnj2" event={"ID":"80c9985b-e915-4819-8355-af9e8076f50a","Type":"ContainerStarted","Data":"ceab3b0f690e9ffefb24c989a53980baaba9e0bc5aa1e141ae50aeba8ca9fda8"} Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.911017 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b9f7ddcd5-q82d2" event={"ID":"ac2ac2e6-024d-46dd-80c1-92472cf6e116","Type":"ContainerStarted","Data":"434eb40e38d9fc287dd4c903c60442580a4e29e76969956493b5d4804d5992c5"} Dec 03 17:59:06 crc kubenswrapper[4687]: I1203 17:59:06.926037 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.099982 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r572g"] Dec 03 17:59:07 crc kubenswrapper[4687]: W1203 17:59:07.106729 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c4d0550_ee2f_49a6_94a9_f00c1b922a94.slice/crio-17c4d03fd5cfa9607e8363aecfa9361666919a5ffdc8b4c2d3e250af140352d3 WatchSource:0}: Error finding container 
17c4d03fd5cfa9607e8363aecfa9361666919a5ffdc8b4c2d3e250af140352d3: Status 404 returned error can't find the container with id 17c4d03fd5cfa9607e8363aecfa9361666919a5ffdc8b4c2d3e250af140352d3 Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.229386 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m67k4"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.241010 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-bqs26"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.247757 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2flgf"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.258707 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-schhv"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.476855 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9ddff4dd7-zxsgk"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.622296 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.701187 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9ddff4dd7-zxsgk"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.749079 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6596889657-28n98"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.751674 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.778476 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6596889657-28n98"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.783154 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.814524 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.849821 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.880883 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-config-data\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.883574 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b07a487-1a41-44de-ac80-4a8f2d26483a-horizon-secret-key\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.883646 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b07a487-1a41-44de-ac80-4a8f2d26483a-logs\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.883938 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-scripts\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.884733 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ttww\" (UniqueName: \"kubernetes.io/projected/2b07a487-1a41-44de-ac80-4a8f2d26483a-kube-api-access-2ttww\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.952714 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2flgf" event={"ID":"f34993b1-3135-46ef-9f85-9ab7525b1682","Type":"ContainerStarted","Data":"594189f8e4b7511f56681ffec35e83a4e1e3d570622ce92f366bcd6138ef82a1"} Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.956247 4687 generic.go:334] "Generic (PLEG): container finished" podID="bc580093-1ee1-4b3d-b8c6-700bf15b5330" containerID="dec6fc8839b2692b8c33f7c771601fe4bca470b4962250af0d2127ef8e818a96" exitCode=0 Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.956340 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" event={"ID":"bc580093-1ee1-4b3d-b8c6-700bf15b5330","Type":"ContainerDied","Data":"dec6fc8839b2692b8c33f7c771601fe4bca470b4962250af0d2127ef8e818a96"} Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.960395 4687 generic.go:334] "Generic (PLEG): container finished" podID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" containerID="6df47922de8f88d0d3cff5b6f18be9a76228f99fe03afd13283baa900f134811" exitCode=0 Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.960457 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" event={"ID":"1cae5f8b-6d0e-4f66-867a-7d7288528ce4","Type":"ContainerDied","Data":"6df47922de8f88d0d3cff5b6f18be9a76228f99fe03afd13283baa900f134811"} Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.960483 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" event={"ID":"1cae5f8b-6d0e-4f66-867a-7d7288528ce4","Type":"ContainerStarted","Data":"59a32ed139506fc1d0e7a47a4202fde515fc39137ac32a1272c26c7760fb4068"} Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.964322 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m67k4" event={"ID":"a59cd24e-e105-48b8-a084-909b0dca97c0","Type":"ContainerStarted","Data":"056e8c944fc4ecb0bbee8e2995299c9caa860df7767db90c6b53c52caa8122ad"} Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.971331 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8c85763-5d16-4d14-9ce2-0aa054e701e4","Type":"ContainerStarted","Data":"9ab4b9d00c16d9ee90bfaa3301579c0c125081a3456d4c101fd666d48487e23f"} Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.977711 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-schhv" event={"ID":"67159b4a-2e66-424e-9e93-4863da0f5b56","Type":"ContainerStarted","Data":"783d24135ca63fecab3da39dfe07599048f106bfad74d4f4f632831923796b67"} Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.981164 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hsnj2" event={"ID":"80c9985b-e915-4819-8355-af9e8076f50a","Type":"ContainerStarted","Data":"6c7e1a12bcc10974c23a95be6f4db6fb13ffb0c1b4ff38d9fc0fa8a06790252b"} Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.986798 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-config-data\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.986843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b07a487-1a41-44de-ac80-4a8f2d26483a-horizon-secret-key\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.986871 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b07a487-1a41-44de-ac80-4a8f2d26483a-logs\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.986974 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-scripts\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.986994 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ttww\" (UniqueName: \"kubernetes.io/projected/2b07a487-1a41-44de-ac80-4a8f2d26483a-kube-api-access-2ttww\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.988375 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ee43441f-77ef-4fd7-a326-b173070a6060","Type":"ContainerStarted","Data":"21bd6d9b89e665e210f87c59ccf19e982f1bc971ff870cd9f7a3c9f53d548633"}
Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.989906 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-scripts\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98"
Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.991588 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b07a487-1a41-44de-ac80-4a8f2d26483a-logs\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98"
Dec 03 17:59:07 crc kubenswrapper[4687]: I1203 17:59:07.994824 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-config-data\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98"
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.000980 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r572g" event={"ID":"7c4d0550-ee2f-49a6-94a9-f00c1b922a94","Type":"ContainerStarted","Data":"1249deec1eb1764e6a6e1535920ec9a99f9cb675e451ca77274c35242f90287a"}
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.001018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r572g" event={"ID":"7c4d0550-ee2f-49a6-94a9-f00c1b922a94","Type":"ContainerStarted","Data":"17c4d03fd5cfa9607e8363aecfa9361666919a5ffdc8b4c2d3e250af140352d3"}
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.014809 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b07a487-1a41-44de-ac80-4a8f2d26483a-horizon-secret-key\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98"
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.020180 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ttww\" (UniqueName: \"kubernetes.io/projected/2b07a487-1a41-44de-ac80-4a8f2d26483a-kube-api-access-2ttww\") pod \"horizon-6596889657-28n98\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " pod="openstack/horizon-6596889657-28n98"
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.026158 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hsnj2" podStartSLOduration=3.026140627 podStartE2EDuration="3.026140627s" podCreationTimestamp="2025-12-03 17:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:08.01441658 +0000 UTC m=+1180.905112013" watchObservedRunningTime="2025-12-03 17:59:08.026140627 +0000 UTC m=+1180.916836070"
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.036570 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9ddff4dd7-zxsgk" event={"ID":"84c51663-b6c7-4d14-9990-105bf776f49c","Type":"ContainerStarted","Data":"37964e082bde755d1209a386886fc5fe3d354996fc378ae5df270c38c2618eda"}
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.037502 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r572g" podStartSLOduration=3.037491294 podStartE2EDuration="3.037491294s" podCreationTimestamp="2025-12-03 17:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:08.035341955 +0000 UTC m=+1180.926037388" watchObservedRunningTime="2025-12-03 17:59:08.037491294 +0000 UTC m=+1180.928186727"
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.131820 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6596889657-28n98"
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.335354 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-sdzg5"
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.393702 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjlwr\" (UniqueName: \"kubernetes.io/projected/bc580093-1ee1-4b3d-b8c6-700bf15b5330-kube-api-access-zjlwr\") pod \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") "
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.394095 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-nb\") pod \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") "
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.394346 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-config\") pod \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") "
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.394378 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-svc\") pod \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") "
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.394409 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-sb\") pod \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") "
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.394505 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-swift-storage-0\") pod \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\" (UID: \"bc580093-1ee1-4b3d-b8c6-700bf15b5330\") "
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.431150 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.431850 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc580093-1ee1-4b3d-b8c6-700bf15b5330-kube-api-access-zjlwr" (OuterVolumeSpecName: "kube-api-access-zjlwr") pod "bc580093-1ee1-4b3d-b8c6-700bf15b5330" (UID: "bc580093-1ee1-4b3d-b8c6-700bf15b5330"). InnerVolumeSpecName "kube-api-access-zjlwr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.497312 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjlwr\" (UniqueName: \"kubernetes.io/projected/bc580093-1ee1-4b3d-b8c6-700bf15b5330-kube-api-access-zjlwr\") on node \"crc\" DevicePath \"\""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.629907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc580093-1ee1-4b3d-b8c6-700bf15b5330" (UID: "bc580093-1ee1-4b3d-b8c6-700bf15b5330"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.629974 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc580093-1ee1-4b3d-b8c6-700bf15b5330" (UID: "bc580093-1ee1-4b3d-b8c6-700bf15b5330"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.639663 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc580093-1ee1-4b3d-b8c6-700bf15b5330" (UID: "bc580093-1ee1-4b3d-b8c6-700bf15b5330"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.647678 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc580093-1ee1-4b3d-b8c6-700bf15b5330" (UID: "bc580093-1ee1-4b3d-b8c6-700bf15b5330"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.668654 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-config" (OuterVolumeSpecName: "config") pod "bc580093-1ee1-4b3d-b8c6-700bf15b5330" (UID: "bc580093-1ee1-4b3d-b8c6-700bf15b5330"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.703680 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.704022 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-config\") on node \"crc\" DevicePath \"\""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.704034 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.704045 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.704056 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc580093-1ee1-4b3d-b8c6-700bf15b5330-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 03 17:59:08 crc kubenswrapper[4687]: I1203 17:59:08.741946 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6596889657-28n98"]
Dec 03 17:59:08 crc kubenswrapper[4687]: W1203 17:59:08.760011 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b07a487_1a41_44de_ac80_4a8f2d26483a.slice/crio-c5077cad09a01352e9fab1ec600a04d84dd0ef4a3183638e6522adcb608855bf WatchSource:0}: Error finding container c5077cad09a01352e9fab1ec600a04d84dd0ef4a3183638e6522adcb608855bf: Status 404 returned error can't find the container with id c5077cad09a01352e9fab1ec600a04d84dd0ef4a3183638e6522adcb608855bf
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.075821 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" event={"ID":"1cae5f8b-6d0e-4f66-867a-7d7288528ce4","Type":"ContainerStarted","Data":"b88f7a8660a7ad950ee492fd13e324a083575b5e87405bacaf0a1829f2d97bba"}
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.076202 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.080059 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"caf2fb27-488f-4976-8cf9-082b37eb90d0","Type":"ContainerStarted","Data":"63678f0143317a0e31f3351391d9b75226f3060b094983962258111a47b69d6c"}
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.093875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8c85763-5d16-4d14-9ce2-0aa054e701e4","Type":"ContainerStarted","Data":"c3582a4d620e2e933da6ed2868264d16e1fa97dc909e24f9cca82df1e22c246e"}
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.103879 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6596889657-28n98" event={"ID":"2b07a487-1a41-44de-ac80-4a8f2d26483a","Type":"ContainerStarted","Data":"c5077cad09a01352e9fab1ec600a04d84dd0ef4a3183638e6522adcb608855bf"}
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.110231 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-sdzg5"
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.110250 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" podStartSLOduration=4.110222813 podStartE2EDuration="4.110222813s" podCreationTimestamp="2025-12-03 17:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:09.099832721 +0000 UTC m=+1181.990528164" watchObservedRunningTime="2025-12-03 17:59:09.110222813 +0000 UTC m=+1182.000918246"
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.110215 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-sdzg5" event={"ID":"bc580093-1ee1-4b3d-b8c6-700bf15b5330","Type":"ContainerDied","Data":"fe828dff755e158490a51046a4f47e4deffe775e9d345824765b0d1decc01e64"}
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.110361 4687 scope.go:117] "RemoveContainer" containerID="dec6fc8839b2692b8c33f7c771601fe4bca470b4962250af0d2127ef8e818a96"
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.219921 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sdzg5"]
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.242688 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-sdzg5"]
Dec 03 17:59:09 crc kubenswrapper[4687]: I1203 17:59:09.559696 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc580093-1ee1-4b3d-b8c6-700bf15b5330" path="/var/lib/kubelet/pods/bc580093-1ee1-4b3d-b8c6-700bf15b5330/volumes"
Dec 03 17:59:11 crc kubenswrapper[4687]: I1203 17:59:11.143477 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8c85763-5d16-4d14-9ce2-0aa054e701e4","Type":"ContainerStarted","Data":"bdb38df354603f3304556c9524dd7477bed3f3f59b81c5f5338e9f3fb5e092c4"}
Dec 03 17:59:11 crc kubenswrapper[4687]: I1203 17:59:11.144238 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerName="glance-log" containerID="cri-o://c3582a4d620e2e933da6ed2868264d16e1fa97dc909e24f9cca82df1e22c246e" gracePeriod=30
Dec 03 17:59:11 crc kubenswrapper[4687]: I1203 17:59:11.144642 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerName="glance-httpd" containerID="cri-o://bdb38df354603f3304556c9524dd7477bed3f3f59b81c5f5338e9f3fb5e092c4" gracePeriod=30
Dec 03 17:59:11 crc kubenswrapper[4687]: I1203 17:59:11.152799 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"caf2fb27-488f-4976-8cf9-082b37eb90d0","Type":"ContainerStarted","Data":"d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7"}
Dec 03 17:59:11 crc kubenswrapper[4687]: I1203 17:59:11.173686 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.173663343 podStartE2EDuration="6.173663343s" podCreationTimestamp="2025-12-03 17:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:11.163949281 +0000 UTC m=+1184.054644724" watchObservedRunningTime="2025-12-03 17:59:11.173663343 +0000 UTC m=+1184.064358776"
Dec 03 17:59:12 crc kubenswrapper[4687]: I1203 17:59:12.170586 4687 generic.go:334] "Generic (PLEG): container finished" podID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerID="bdb38df354603f3304556c9524dd7477bed3f3f59b81c5f5338e9f3fb5e092c4" exitCode=0
Dec 03 17:59:12 crc kubenswrapper[4687]: I1203 17:59:12.170849 4687 generic.go:334] "Generic (PLEG): container finished" podID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerID="c3582a4d620e2e933da6ed2868264d16e1fa97dc909e24f9cca82df1e22c246e" exitCode=143
Dec 03 17:59:12 crc kubenswrapper[4687]: I1203 17:59:12.170771 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8c85763-5d16-4d14-9ce2-0aa054e701e4","Type":"ContainerDied","Data":"bdb38df354603f3304556c9524dd7477bed3f3f59b81c5f5338e9f3fb5e092c4"}
Dec 03 17:59:12 crc kubenswrapper[4687]: I1203 17:59:12.170899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8c85763-5d16-4d14-9ce2-0aa054e701e4","Type":"ContainerDied","Data":"c3582a4d620e2e933da6ed2868264d16e1fa97dc909e24f9cca82df1e22c246e"}
Dec 03 17:59:13 crc kubenswrapper[4687]: I1203 17:59:13.185112 4687 generic.go:334] "Generic (PLEG): container finished" podID="7c4d0550-ee2f-49a6-94a9-f00c1b922a94" containerID="1249deec1eb1764e6a6e1535920ec9a99f9cb675e451ca77274c35242f90287a" exitCode=0
Dec 03 17:59:13 crc kubenswrapper[4687]: I1203 17:59:13.185329 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r572g" event={"ID":"7c4d0550-ee2f-49a6-94a9-f00c1b922a94","Type":"ContainerDied","Data":"1249deec1eb1764e6a6e1535920ec9a99f9cb675e451ca77274c35242f90287a"}
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.013352 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b9f7ddcd5-q82d2"]
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.053728 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58975c669d-5qj7w"]
Dec 03 17:59:14 crc kubenswrapper[4687]: E1203 17:59:14.054061 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc580093-1ee1-4b3d-b8c6-700bf15b5330" containerName="init"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.054079 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc580093-1ee1-4b3d-b8c6-700bf15b5330" containerName="init"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.054290 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc580093-1ee1-4b3d-b8c6-700bf15b5330" containerName="init"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.055170 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.061625 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.069402 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58975c669d-5qj7w"]
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.108735 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6596889657-28n98"]
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.147189 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6968cc7b7b-57qh6"]
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.148835 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.165400 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6968cc7b7b-57qh6"]
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.214169 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-tls-certs\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.214253 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-combined-ca-bundle\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.214296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-logs\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.214331 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-secret-key\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.214353 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-scripts\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.214603 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkrfc\" (UniqueName: \"kubernetes.io/projected/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-kube-api-access-nkrfc\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.214626 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-config-data\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336028 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-secret-key\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336089 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-scripts\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336164 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkrfc\" (UniqueName: \"kubernetes.io/projected/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-kube-api-access-nkrfc\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336188 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-config-data\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336227 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b08dc684-ab9f-41db-a259-2d06b757f3cf-scripts\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336270 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b08dc684-ab9f-41db-a259-2d06b757f3cf-logs\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336310 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-tls-certs\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336339 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08dc684-ab9f-41db-a259-2d06b757f3cf-combined-ca-bundle\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336368 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08dc684-ab9f-41db-a259-2d06b757f3cf-horizon-tls-certs\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336398 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b08dc684-ab9f-41db-a259-2d06b757f3cf-config-data\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4mc8\" (UniqueName: \"kubernetes.io/projected/b08dc684-ab9f-41db-a259-2d06b757f3cf-kube-api-access-s4mc8\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336486 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-combined-ca-bundle\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336506 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b08dc684-ab9f-41db-a259-2d06b757f3cf-horizon-secret-key\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.336560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-logs\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.337059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-logs\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.347550 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-scripts\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.348898 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-config-data\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.355523 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-combined-ca-bundle\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.357079 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-secret-key\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.357476 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-tls-certs\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.358083 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkrfc\" (UniqueName: \"kubernetes.io/projected/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-kube-api-access-nkrfc\") pod \"horizon-58975c669d-5qj7w\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.394717 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.437575 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4mc8\" (UniqueName: \"kubernetes.io/projected/b08dc684-ab9f-41db-a259-2d06b757f3cf-kube-api-access-s4mc8\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.437643 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b08dc684-ab9f-41db-a259-2d06b757f3cf-horizon-secret-key\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.437786 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b08dc684-ab9f-41db-a259-2d06b757f3cf-scripts\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.437839 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b08dc684-ab9f-41db-a259-2d06b757f3cf-logs\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.437880 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08dc684-ab9f-41db-a259-2d06b757f3cf-combined-ca-bundle\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.437911 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08dc684-ab9f-41db-a259-2d06b757f3cf-horizon-tls-certs\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.437937 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b08dc684-ab9f-41db-a259-2d06b757f3cf-config-data\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.438231 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b08dc684-ab9f-41db-a259-2d06b757f3cf-logs\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.438885 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b08dc684-ab9f-41db-a259-2d06b757f3cf-scripts\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.439511 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b08dc684-ab9f-41db-a259-2d06b757f3cf-config-data\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.441695 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08dc684-ab9f-41db-a259-2d06b757f3cf-combined-ca-bundle\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.442362 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b08dc684-ab9f-41db-a259-2d06b757f3cf-horizon-tls-certs\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.449796 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b08dc684-ab9f-41db-a259-2d06b757f3cf-horizon-secret-key\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.453619 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4mc8\" (UniqueName: \"kubernetes.io/projected/b08dc684-ab9f-41db-a259-2d06b757f3cf-kube-api-access-s4mc8\") pod \"horizon-6968cc7b7b-57qh6\" (UID: \"b08dc684-ab9f-41db-a259-2d06b757f3cf\") " pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:14 crc kubenswrapper[4687]: I1203 17:59:14.484238 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 17:59:16 crc kubenswrapper[4687]: I1203 17:59:16.433365 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26"
Dec 03 17:59:16 crc kubenswrapper[4687]: I1203 17:59:16.493938 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-w94zw"]
Dec 03 17:59:16 crc kubenswrapper[4687]: I1203 17:59:16.494192 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="dnsmasq-dns" containerID="cri-o://3538985e90b2abd590e86e26cf646b003bd893e2467a8de7ce58eb1abaf1a7fe" gracePeriod=10
Dec 03 17:59:18 crc kubenswrapper[4687]: I1203 17:59:18.268623 4687 generic.go:334] "Generic (PLEG): container finished" podID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerID="3538985e90b2abd590e86e26cf646b003bd893e2467a8de7ce58eb1abaf1a7fe" exitCode=0
Dec 03 17:59:18 crc kubenswrapper[4687]: I1203 17:59:18.268975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" event={"ID":"4e1edd4e-b3e9-40ca-8cb1-86380336a2db","Type":"ContainerDied","Data":"3538985e90b2abd590e86e26cf646b003bd893e2467a8de7ce58eb1abaf1a7fe"}
Dec 03 17:59:18 crc kubenswrapper[4687]: I1203 17:59:18.282603 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused"
Dec 03 17:59:23 crc kubenswrapper[4687]: I1203 17:59:23.282420 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused"
Dec 03 17:59:23 crc kubenswrapper[4687]: E1203 17:59:23.546229 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
Dec 03 17:59:23 crc kubenswrapper[4687]: E1203 17:59:23.546411 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nchf9h55bh54chbch67dhbfhbdhddh59dhb9h5f9h7dh58bh5bbh5d6h5cchcdh67fh5d4h59ch8fh54bh7dhd8h567h5fdh89h646h78h5bbh55bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw9hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,
ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-9ddff4dd7-zxsgk_openstack(84c51663-b6c7-4d14-9990-105bf776f49c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:59:23 crc kubenswrapper[4687]: E1203 17:59:23.554314 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-9ddff4dd7-zxsgk" podUID="84c51663-b6c7-4d14-9990-105bf776f49c" Dec 03 17:59:23 crc kubenswrapper[4687]: E1203 17:59:23.570501 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 03 17:59:23 crc kubenswrapper[4687]: E1203 17:59:23.570869 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ddh57hcfh66bhf6h65ch78h7fh586hbh9h5dfh54dh5cdh58fh65ch6ch7fh65bh85h67h577hch54bh8dh5dfh6ch674h79h55bh9bh77q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cktk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-b9f7ddcd5-q82d2_openstack(ac2ac2e6-024d-46dd-80c1-92472cf6e116): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:59:23 crc kubenswrapper[4687]: E1203 17:59:23.577851 
4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-b9f7ddcd5-q82d2" podUID="ac2ac2e6-024d-46dd-80c1-92472cf6e116" Dec 03 17:59:23 crc kubenswrapper[4687]: E1203 17:59:23.917258 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 03 17:59:23 crc kubenswrapper[4687]: E1203 17:59:23.917674 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58bh75h6fhfh657h65dh55bh587h56dh5f4h57bh585h674h8h5c4hc9h7bh596h5fdh9bhfdh68fh67dh65hf4hc6h56dh8ch57dh5c6h578h678q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2bxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ee43441f-77ef-4fd7-a326-b173070a6060): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:59:23 crc kubenswrapper[4687]: I1203 17:59:23.975640 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.126984 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-combined-ca-bundle\") pod \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.127164 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-scripts\") pod \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.127191 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-fernet-keys\") pod \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.128004 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-config-data\") pod \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.128063 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-credential-keys\") pod \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.128099 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf6qx\" (UniqueName: 
\"kubernetes.io/projected/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-kube-api-access-nf6qx\") pod \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\" (UID: \"7c4d0550-ee2f-49a6-94a9-f00c1b922a94\") " Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.133899 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7c4d0550-ee2f-49a6-94a9-f00c1b922a94" (UID: "7c4d0550-ee2f-49a6-94a9-f00c1b922a94"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.133948 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7c4d0550-ee2f-49a6-94a9-f00c1b922a94" (UID: "7c4d0550-ee2f-49a6-94a9-f00c1b922a94"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.141819 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-kube-api-access-nf6qx" (OuterVolumeSpecName: "kube-api-access-nf6qx") pod "7c4d0550-ee2f-49a6-94a9-f00c1b922a94" (UID: "7c4d0550-ee2f-49a6-94a9-f00c1b922a94"). InnerVolumeSpecName "kube-api-access-nf6qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.141991 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-scripts" (OuterVolumeSpecName: "scripts") pod "7c4d0550-ee2f-49a6-94a9-f00c1b922a94" (UID: "7c4d0550-ee2f-49a6-94a9-f00c1b922a94"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.156199 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-config-data" (OuterVolumeSpecName: "config-data") pod "7c4d0550-ee2f-49a6-94a9-f00c1b922a94" (UID: "7c4d0550-ee2f-49a6-94a9-f00c1b922a94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.156988 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c4d0550-ee2f-49a6-94a9-f00c1b922a94" (UID: "7c4d0550-ee2f-49a6-94a9-f00c1b922a94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.230681 4687 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.230727 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf6qx\" (UniqueName: \"kubernetes.io/projected/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-kube-api-access-nf6qx\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.230743 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.230754 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-scripts\") on node \"crc\" DevicePath 
\"\"" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.230766 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.230776 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c4d0550-ee2f-49a6-94a9-f00c1b922a94-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.329429 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r572g" event={"ID":"7c4d0550-ee2f-49a6-94a9-f00c1b922a94","Type":"ContainerDied","Data":"17c4d03fd5cfa9607e8363aecfa9361666919a5ffdc8b4c2d3e250af140352d3"} Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.329483 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17c4d03fd5cfa9607e8363aecfa9361666919a5ffdc8b4c2d3e250af140352d3" Dec 03 17:59:24 crc kubenswrapper[4687]: I1203 17:59:24.329493 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r572g" Dec 03 17:59:24 crc kubenswrapper[4687]: E1203 17:59:24.378201 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 03 17:59:24 crc kubenswrapper[4687]: E1203 17:59:24.379380 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nsh9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDev
ices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2flgf_openstack(f34993b1-3135-46ef-9f85-9ab7525b1682): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:59:24 crc kubenswrapper[4687]: E1203 17:59:24.381186 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2flgf" podUID="f34993b1-3135-46ef-9f85-9ab7525b1682" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.053059 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r572g"] Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.070755 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r572g"] Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.116412 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-6tmxl"] Dec 03 17:59:25 crc kubenswrapper[4687]: E1203 17:59:25.118747 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4d0550-ee2f-49a6-94a9-f00c1b922a94" containerName="keystone-bootstrap" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.118784 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4d0550-ee2f-49a6-94a9-f00c1b922a94" containerName="keystone-bootstrap" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.119110 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c4d0550-ee2f-49a6-94a9-f00c1b922a94" containerName="keystone-bootstrap" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.120300 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.124490 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.124553 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ch9hz" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.124592 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.124635 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.124493 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.137512 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6tmxl"] Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.249681 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq6s2\" (UniqueName: \"kubernetes.io/projected/23273387-49bc-4a7e-b07a-5695d947eda9-kube-api-access-xq6s2\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.249793 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-combined-ca-bundle\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.249867 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-credential-keys\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.249909 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-config-data\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.250102 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-fernet-keys\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.250169 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-scripts\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: E1203 17:59:25.337889 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-2flgf" podUID="f34993b1-3135-46ef-9f85-9ab7525b1682" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.352140 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-xq6s2\" (UniqueName: \"kubernetes.io/projected/23273387-49bc-4a7e-b07a-5695d947eda9-kube-api-access-xq6s2\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.352224 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-combined-ca-bundle\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.352267 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-credential-keys\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.352300 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-config-data\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.352416 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-fernet-keys\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.352443 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-scripts\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.358571 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-fernet-keys\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.358950 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-combined-ca-bundle\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.359003 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-credential-keys\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.359302 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-config-data\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.360045 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-scripts\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " 
pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.369979 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq6s2\" (UniqueName: \"kubernetes.io/projected/23273387-49bc-4a7e-b07a-5695d947eda9-kube-api-access-xq6s2\") pod \"keystone-bootstrap-6tmxl\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.418638 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c4d0550-ee2f-49a6-94a9-f00c1b922a94" path="/var/lib/kubelet/pods/7c4d0550-ee2f-49a6-94a9-f00c1b922a94/volumes" Dec 03 17:59:25 crc kubenswrapper[4687]: I1203 17:59:25.448849 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:26 crc kubenswrapper[4687]: I1203 17:59:26.345423 4687 generic.go:334] "Generic (PLEG): container finished" podID="80c9985b-e915-4819-8355-af9e8076f50a" containerID="6c7e1a12bcc10974c23a95be6f4db6fb13ffb0c1b4ff38d9fc0fa8a06790252b" exitCode=0 Dec 03 17:59:26 crc kubenswrapper[4687]: I1203 17:59:26.345508 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hsnj2" event={"ID":"80c9985b-e915-4819-8355-af9e8076f50a","Type":"ContainerDied","Data":"6c7e1a12bcc10974c23a95be6f4db6fb13ffb0c1b4ff38d9fc0fa8a06790252b"} Dec 03 17:59:28 crc kubenswrapper[4687]: I1203 17:59:28.281974 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Dec 03 17:59:28 crc kubenswrapper[4687]: I1203 17:59:28.282369 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:59:31 crc kubenswrapper[4687]: I1203 17:59:31.443018 4687 
scope.go:117] "RemoveContainer" containerID="67d4225f9d0ccc781b09438055a867bc8c4adab0558ed87596ce607b4d07e026" Dec 03 17:59:33 crc kubenswrapper[4687]: I1203 17:59:33.282207 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Dec 03 17:59:34 crc kubenswrapper[4687]: E1203 17:59:34.041260 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 03 17:59:34 crc kubenswrapper[4687]: E1203 17:59:34.041461 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncch646h696h66chb4h5f4h9dh589h566h99hb9h598h646hfdhc5h7ch65fh8dh5bch59dh98h675hdbhd5hfdhb5h68dh54fhb5h7h686h549q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ttww,ReadOnly:true,MountPath:/var/run/secrets/kuber
netes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6596889657-28n98_openstack(2b07a487-1a41-44de-ac80-4a8f2d26483a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:59:34 crc kubenswrapper[4687]: E1203 17:59:34.045717 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6596889657-28n98" podUID="2b07a487-1a41-44de-ac80-4a8f2d26483a" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.209671 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.217939 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9ddff4dd7-zxsgk" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.238711 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.244373 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.312319 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-config-data\") pod \"84c51663-b6c7-4d14-9990-105bf776f49c\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.312716 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-httpd-run\") pod \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.312766 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac2ac2e6-024d-46dd-80c1-92472cf6e116-logs\") pod \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.312815 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-combined-ca-bundle\") pod \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.312875 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-scripts\") pod \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " Dec 03 17:59:34 crc 
kubenswrapper[4687]: I1203 17:59:34.312906 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-combined-ca-bundle\") pod \"80c9985b-e915-4819-8355-af9e8076f50a\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.312971 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-config\") pod \"80c9985b-e915-4819-8355-af9e8076f50a\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313022 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84c51663-b6c7-4d14-9990-105bf776f49c-logs\") pod \"84c51663-b6c7-4d14-9990-105bf776f49c\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313049 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-config-data\") pod \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313069 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-public-tls-certs\") pod \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313094 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cktk5\" (UniqueName: \"kubernetes.io/projected/ac2ac2e6-024d-46dd-80c1-92472cf6e116-kube-api-access-cktk5\") pod 
\"ac2ac2e6-024d-46dd-80c1-92472cf6e116\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313135 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd9tg\" (UniqueName: \"kubernetes.io/projected/c8c85763-5d16-4d14-9ce2-0aa054e701e4-kube-api-access-wd9tg\") pod \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313170 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313214 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmgw5\" (UniqueName: \"kubernetes.io/projected/80c9985b-e915-4819-8355-af9e8076f50a-kube-api-access-zmgw5\") pod \"80c9985b-e915-4819-8355-af9e8076f50a\" (UID: \"80c9985b-e915-4819-8355-af9e8076f50a\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313242 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/84c51663-b6c7-4d14-9990-105bf776f49c-horizon-secret-key\") pod \"84c51663-b6c7-4d14-9990-105bf776f49c\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313283 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-config-data\") pod \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313310 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-scripts\") pod \"84c51663-b6c7-4d14-9990-105bf776f49c\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313508 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-logs\") pod \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313536 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9hc\" (UniqueName: \"kubernetes.io/projected/84c51663-b6c7-4d14-9990-105bf776f49c-kube-api-access-fw9hc\") pod \"84c51663-b6c7-4d14-9990-105bf776f49c\" (UID: \"84c51663-b6c7-4d14-9990-105bf776f49c\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313559 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-scripts\") pod \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\" (UID: \"c8c85763-5d16-4d14-9ce2-0aa054e701e4\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.313586 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac2ac2e6-024d-46dd-80c1-92472cf6e116-horizon-secret-key\") pod \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\" (UID: \"ac2ac2e6-024d-46dd-80c1-92472cf6e116\") " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.314683 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac2ac2e6-024d-46dd-80c1-92472cf6e116-logs" (OuterVolumeSpecName: "logs") pod "ac2ac2e6-024d-46dd-80c1-92472cf6e116" (UID: "ac2ac2e6-024d-46dd-80c1-92472cf6e116"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.314902 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-scripts" (OuterVolumeSpecName: "scripts") pod "ac2ac2e6-024d-46dd-80c1-92472cf6e116" (UID: "ac2ac2e6-024d-46dd-80c1-92472cf6e116"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.315159 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c8c85763-5d16-4d14-9ce2-0aa054e701e4" (UID: "c8c85763-5d16-4d14-9ce2-0aa054e701e4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.316898 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-scripts" (OuterVolumeSpecName: "scripts") pod "84c51663-b6c7-4d14-9990-105bf776f49c" (UID: "84c51663-b6c7-4d14-9990-105bf776f49c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.318204 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-config-data" (OuterVolumeSpecName: "config-data") pod "84c51663-b6c7-4d14-9990-105bf776f49c" (UID: "84c51663-b6c7-4d14-9990-105bf776f49c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.322351 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2ac2e6-024d-46dd-80c1-92472cf6e116-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ac2ac2e6-024d-46dd-80c1-92472cf6e116" (UID: "ac2ac2e6-024d-46dd-80c1-92472cf6e116"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.324838 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80c9985b-e915-4819-8355-af9e8076f50a-kube-api-access-zmgw5" (OuterVolumeSpecName: "kube-api-access-zmgw5") pod "80c9985b-e915-4819-8355-af9e8076f50a" (UID: "80c9985b-e915-4819-8355-af9e8076f50a"). InnerVolumeSpecName "kube-api-access-zmgw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.325641 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-logs" (OuterVolumeSpecName: "logs") pod "c8c85763-5d16-4d14-9ce2-0aa054e701e4" (UID: "c8c85763-5d16-4d14-9ce2-0aa054e701e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.325724 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84c51663-b6c7-4d14-9990-105bf776f49c-logs" (OuterVolumeSpecName: "logs") pod "84c51663-b6c7-4d14-9990-105bf776f49c" (UID: "84c51663-b6c7-4d14-9990-105bf776f49c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.326093 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-scripts" (OuterVolumeSpecName: "scripts") pod "c8c85763-5d16-4d14-9ce2-0aa054e701e4" (UID: "c8c85763-5d16-4d14-9ce2-0aa054e701e4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.326343 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-config-data" (OuterVolumeSpecName: "config-data") pod "ac2ac2e6-024d-46dd-80c1-92472cf6e116" (UID: "ac2ac2e6-024d-46dd-80c1-92472cf6e116"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.331042 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c85763-5d16-4d14-9ce2-0aa054e701e4-kube-api-access-wd9tg" (OuterVolumeSpecName: "kube-api-access-wd9tg") pod "c8c85763-5d16-4d14-9ce2-0aa054e701e4" (UID: "c8c85763-5d16-4d14-9ce2-0aa054e701e4"). InnerVolumeSpecName "kube-api-access-wd9tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.332996 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "c8c85763-5d16-4d14-9ce2-0aa054e701e4" (UID: "c8c85763-5d16-4d14-9ce2-0aa054e701e4"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.335649 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c51663-b6c7-4d14-9990-105bf776f49c-kube-api-access-fw9hc" (OuterVolumeSpecName: "kube-api-access-fw9hc") pod "84c51663-b6c7-4d14-9990-105bf776f49c" (UID: "84c51663-b6c7-4d14-9990-105bf776f49c"). InnerVolumeSpecName "kube-api-access-fw9hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.341133 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac2ac2e6-024d-46dd-80c1-92472cf6e116-kube-api-access-cktk5" (OuterVolumeSpecName: "kube-api-access-cktk5") pod "ac2ac2e6-024d-46dd-80c1-92472cf6e116" (UID: "ac2ac2e6-024d-46dd-80c1-92472cf6e116"). InnerVolumeSpecName "kube-api-access-cktk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.347950 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8c85763-5d16-4d14-9ce2-0aa054e701e4" (UID: "c8c85763-5d16-4d14-9ce2-0aa054e701e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.355439 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84c51663-b6c7-4d14-9990-105bf776f49c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "84c51663-b6c7-4d14-9990-105bf776f49c" (UID: "84c51663-b6c7-4d14-9990-105bf776f49c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.358847 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80c9985b-e915-4819-8355-af9e8076f50a" (UID: "80c9985b-e915-4819-8355-af9e8076f50a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.361376 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-config" (OuterVolumeSpecName: "config") pod "80c9985b-e915-4819-8355-af9e8076f50a" (UID: "80c9985b-e915-4819-8355-af9e8076f50a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.390306 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c8c85763-5d16-4d14-9ce2-0aa054e701e4" (UID: "c8c85763-5d16-4d14-9ce2-0aa054e701e4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.400700 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-config-data" (OuterVolumeSpecName: "config-data") pod "c8c85763-5d16-4d14-9ce2-0aa054e701e4" (UID: "c8c85763-5d16-4d14-9ce2-0aa054e701e4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.414717 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9ddff4dd7-zxsgk" event={"ID":"84c51663-b6c7-4d14-9990-105bf776f49c","Type":"ContainerDied","Data":"37964e082bde755d1209a386886fc5fe3d354996fc378ae5df270c38c2618eda"} Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.414831 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9ddff4dd7-zxsgk" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416232 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac2ac2e6-024d-46dd-80c1-92472cf6e116-logs\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416288 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416300 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416308 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416316 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/80c9985b-e915-4819-8355-af9e8076f50a-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416324 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/84c51663-b6c7-4d14-9990-105bf776f49c-logs\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416332 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ac2ac2e6-024d-46dd-80c1-92472cf6e116-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416341 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416350 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cktk5\" (UniqueName: \"kubernetes.io/projected/ac2ac2e6-024d-46dd-80c1-92472cf6e116-kube-api-access-cktk5\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416358 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd9tg\" (UniqueName: \"kubernetes.io/projected/c8c85763-5d16-4d14-9ce2-0aa054e701e4-kube-api-access-wd9tg\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416379 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416388 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmgw5\" (UniqueName: \"kubernetes.io/projected/80c9985b-e915-4819-8355-af9e8076f50a-kube-api-access-zmgw5\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416396 4687 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/84c51663-b6c7-4d14-9990-105bf776f49c-horizon-secret-key\") on node \"crc\" 
DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416406 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416414 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416421 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-logs\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416429 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9hc\" (UniqueName: \"kubernetes.io/projected/84c51663-b6c7-4d14-9990-105bf776f49c-kube-api-access-fw9hc\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416437 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8c85763-5d16-4d14-9ce2-0aa054e701e4-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416444 4687 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ac2ac2e6-024d-46dd-80c1-92472cf6e116-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416452 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84c51663-b6c7-4d14-9990-105bf776f49c-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.416459 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/c8c85763-5d16-4d14-9ce2-0aa054e701e4-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.420634 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.420651 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c8c85763-5d16-4d14-9ce2-0aa054e701e4","Type":"ContainerDied","Data":"9ab4b9d00c16d9ee90bfaa3301579c0c125081a3456d4c101fd666d48487e23f"} Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.420694 4687 scope.go:117] "RemoveContainer" containerID="bdb38df354603f3304556c9524dd7477bed3f3f59b81c5f5338e9f3fb5e092c4" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.426598 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hsnj2" event={"ID":"80c9985b-e915-4819-8355-af9e8076f50a","Type":"ContainerDied","Data":"ceab3b0f690e9ffefb24c989a53980baaba9e0bc5aa1e141ae50aeba8ca9fda8"} Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.426631 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceab3b0f690e9ffefb24c989a53980baaba9e0bc5aa1e141ae50aeba8ca9fda8" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.426684 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hsnj2" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.434462 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b9f7ddcd5-q82d2" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.436578 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b9f7ddcd5-q82d2" event={"ID":"ac2ac2e6-024d-46dd-80c1-92472cf6e116","Type":"ContainerDied","Data":"434eb40e38d9fc287dd4c903c60442580a4e29e76969956493b5d4804d5992c5"} Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.471321 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.526860 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9ddff4dd7-zxsgk"] Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.526911 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9ddff4dd7-zxsgk"] Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.554580 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.580970 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.608234 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.649045 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 17:59:34 crc kubenswrapper[4687]: E1203 17:59:34.649541 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerName="glance-log" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.649559 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerName="glance-log" Dec 03 17:59:34 crc kubenswrapper[4687]: E1203 17:59:34.649593 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerName="glance-httpd" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.649604 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerName="glance-httpd" Dec 03 17:59:34 crc kubenswrapper[4687]: E1203 17:59:34.649621 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c9985b-e915-4819-8355-af9e8076f50a" containerName="neutron-db-sync" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.649629 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c9985b-e915-4819-8355-af9e8076f50a" containerName="neutron-db-sync" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.649875 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerName="glance-log" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.649891 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="80c9985b-e915-4819-8355-af9e8076f50a" containerName="neutron-db-sync" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.649908 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" containerName="glance-httpd" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.651932 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.658026 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.658677 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.672980 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b9f7ddcd5-q82d2"] Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.690920 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b9f7ddcd5-q82d2"] Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.701387 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.713846 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58975c669d-5qj7w"] Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.761062 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.761103 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.761145 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-config-data\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.761179 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-scripts\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.761236 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnt4c\" (UniqueName: \"kubernetes.io/projected/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-kube-api-access-wnt4c\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.761259 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.761275 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.761314 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-logs\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.863311 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnt4c\" (UniqueName: \"kubernetes.io/projected/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-kube-api-access-wnt4c\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.863360 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.863382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.863421 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-logs\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.863483 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.863518 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.863537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-config-data\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.863570 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-scripts\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.864508 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-logs\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.864647 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.864847 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.875513 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.877172 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-scripts\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.888420 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.889181 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnt4c\" (UniqueName: \"kubernetes.io/projected/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-kube-api-access-wnt4c\") pod \"glance-default-external-api-0\" (UID: 
\"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.891371 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-config-data\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.919757 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " pod="openstack/glance-default-external-api-0" Dec 03 17:59:34 crc kubenswrapper[4687]: I1203 17:59:34.974472 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.440981 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c51663-b6c7-4d14-9990-105bf776f49c" path="/var/lib/kubelet/pods/84c51663-b6c7-4d14-9990-105bf776f49c/volumes" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.441412 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac2ac2e6-024d-46dd-80c1-92472cf6e116" path="/var/lib/kubelet/pods/ac2ac2e6-024d-46dd-80c1-92472cf6e116/volumes" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.441845 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c85763-5d16-4d14-9ce2-0aa054e701e4" path="/var/lib/kubelet/pods/c8c85763-5d16-4d14-9ce2-0aa054e701e4/volumes" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.523269 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crlv7"] Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.524936 4687 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.571802 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crlv7"] Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.674399 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66484d5554-njnbk"] Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.676137 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.679672 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4cs5r" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.679849 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.679976 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.680337 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.683795 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.683832 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: 
\"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.683881 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p5bd\" (UniqueName: \"kubernetes.io/projected/8d6ba5c5-8f42-4aca-8548-f385332049ed-kube-api-access-5p5bd\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.683935 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-svc\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.683980 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.683996 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-config\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.686691 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66484d5554-njnbk"] Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.785736 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.785789 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.785831 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-ovndb-tls-certs\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.785865 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-config\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.785899 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p5bd\" (UniqueName: \"kubernetes.io/projected/8d6ba5c5-8f42-4aca-8548-f385332049ed-kube-api-access-5p5bd\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.785936 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-httpd-config\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.785961 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hhjg\" (UniqueName: \"kubernetes.io/projected/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-kube-api-access-8hhjg\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.786010 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-svc\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.786070 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-combined-ca-bundle\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.786107 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.786147 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-config\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.786823 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.787003 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-config\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.787406 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.787699 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.787815 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-svc\") pod \"dnsmasq-dns-55f844cf75-crlv7\" 
(UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.807077 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p5bd\" (UniqueName: \"kubernetes.io/projected/8d6ba5c5-8f42-4aca-8548-f385332049ed-kube-api-access-5p5bd\") pod \"dnsmasq-dns-55f844cf75-crlv7\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.876213 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.887999 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-ovndb-tls-certs\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.888051 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-config\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.888096 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-httpd-config\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.888210 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hhjg\" (UniqueName: 
\"kubernetes.io/projected/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-kube-api-access-8hhjg\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.888292 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-combined-ca-bundle\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.892285 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-ovndb-tls-certs\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.894431 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-combined-ca-bundle\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.895454 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-config\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.903103 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-httpd-config\") pod \"neutron-66484d5554-njnbk\" (UID: 
\"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:35 crc kubenswrapper[4687]: I1203 17:59:35.923020 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hhjg\" (UniqueName: \"kubernetes.io/projected/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-kube-api-access-8hhjg\") pod \"neutron-66484d5554-njnbk\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.026680 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:36 crc kubenswrapper[4687]: W1203 17:59:36.239408 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2559a1aa_62c1_43b3_9183_66ebe4d8efc9.slice/crio-f12a5e5077f32685f2916c5b1125ff6c0b85114ba7ce323266c039f28d6b5c81 WatchSource:0}: Error finding container f12a5e5077f32685f2916c5b1125ff6c0b85114ba7ce323266c039f28d6b5c81: Status 404 returned error can't find the container with id f12a5e5077f32685f2916c5b1125ff6c0b85114ba7ce323266c039f28d6b5c81 Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.251557 4687 scope.go:117] "RemoveContainer" containerID="c3582a4d620e2e933da6ed2868264d16e1fa97dc909e24f9cca82df1e22c246e" Dec 03 17:59:36 crc kubenswrapper[4687]: E1203 17:59:36.272933 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 03 17:59:36 crc kubenswrapper[4687]: E1203 17:59:36.273082 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4srk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-schhv_openstack(67159b4a-2e66-424e-9e93-4863da0f5b56): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 17:59:36 crc kubenswrapper[4687]: E1203 17:59:36.274185 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-schhv" podUID="67159b4a-2e66-424e-9e93-4863da0f5b56" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.280451 4687 scope.go:117] "RemoveContainer" containerID="d2baac316fa55451abd58f638cb9814300a13e16dde014174b5da373d3ed5132" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.411109 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.414585 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498023 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-nb\") pod \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498115 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b07a487-1a41-44de-ac80-4a8f2d26483a-logs\") pod \"2b07a487-1a41-44de-ac80-4a8f2d26483a\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498252 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-scripts\") pod \"2b07a487-1a41-44de-ac80-4a8f2d26483a\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498300 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-sb\") pod \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498333 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-config-data\") pod \"2b07a487-1a41-44de-ac80-4a8f2d26483a\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498364 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ttww\" (UniqueName: 
\"kubernetes.io/projected/2b07a487-1a41-44de-ac80-4a8f2d26483a-kube-api-access-2ttww\") pod \"2b07a487-1a41-44de-ac80-4a8f2d26483a\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498411 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-svc\") pod \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498429 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b07a487-1a41-44de-ac80-4a8f2d26483a-horizon-secret-key\") pod \"2b07a487-1a41-44de-ac80-4a8f2d26483a\" (UID: \"2b07a487-1a41-44de-ac80-4a8f2d26483a\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498451 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-config\") pod \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498470 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5bzt\" (UniqueName: \"kubernetes.io/projected/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-kube-api-access-q5bzt\") pod \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.498496 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-swift-storage-0\") pod \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\" (UID: \"4e1edd4e-b3e9-40ca-8cb1-86380336a2db\") " Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 
17:59:36.507309 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b07a487-1a41-44de-ac80-4a8f2d26483a-kube-api-access-2ttww" (OuterVolumeSpecName: "kube-api-access-2ttww") pod "2b07a487-1a41-44de-ac80-4a8f2d26483a" (UID: "2b07a487-1a41-44de-ac80-4a8f2d26483a"). InnerVolumeSpecName "kube-api-access-2ttww". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.507840 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b07a487-1a41-44de-ac80-4a8f2d26483a-logs" (OuterVolumeSpecName: "logs") pod "2b07a487-1a41-44de-ac80-4a8f2d26483a" (UID: "2b07a487-1a41-44de-ac80-4a8f2d26483a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.508192 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-scripts" (OuterVolumeSpecName: "scripts") pod "2b07a487-1a41-44de-ac80-4a8f2d26483a" (UID: "2b07a487-1a41-44de-ac80-4a8f2d26483a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.524018 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-config-data" (OuterVolumeSpecName: "config-data") pod "2b07a487-1a41-44de-ac80-4a8f2d26483a" (UID: "2b07a487-1a41-44de-ac80-4a8f2d26483a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.532338 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-kube-api-access-q5bzt" (OuterVolumeSpecName: "kube-api-access-q5bzt") pod "4e1edd4e-b3e9-40ca-8cb1-86380336a2db" (UID: "4e1edd4e-b3e9-40ca-8cb1-86380336a2db"). InnerVolumeSpecName "kube-api-access-q5bzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.533668 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b07a487-1a41-44de-ac80-4a8f2d26483a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2b07a487-1a41-44de-ac80-4a8f2d26483a" (UID: "2b07a487-1a41-44de-ac80-4a8f2d26483a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.557565 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" event={"ID":"4e1edd4e-b3e9-40ca-8cb1-86380336a2db","Type":"ContainerDied","Data":"bf93da3441ac2b7d3014e0db8c8fdb9a2e643787f70cf032bc05fbe1caaa0d34"} Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.557651 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-w94zw" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.572310 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58975c669d-5qj7w" event={"ID":"2559a1aa-62c1-43b3-9183-66ebe4d8efc9","Type":"ContainerStarted","Data":"f12a5e5077f32685f2916c5b1125ff6c0b85114ba7ce323266c039f28d6b5c81"} Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.577092 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e1edd4e-b3e9-40ca-8cb1-86380336a2db" (UID: "4e1edd4e-b3e9-40ca-8cb1-86380336a2db"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.577444 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6596889657-28n98" event={"ID":"2b07a487-1a41-44de-ac80-4a8f2d26483a","Type":"ContainerDied","Data":"c5077cad09a01352e9fab1ec600a04d84dd0ef4a3183638e6522adcb608855bf"} Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.577462 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6596889657-28n98" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.579748 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6968cc7b7b-57qh6"] Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.581030 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-config" (OuterVolumeSpecName: "config") pod "4e1edd4e-b3e9-40ca-8cb1-86380336a2db" (UID: "4e1edd4e-b3e9-40ca-8cb1-86380336a2db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: E1203 17:59:36.592703 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-schhv" podUID="67159b4a-2e66-424e-9e93-4863da0f5b56" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.601335 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.601367 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b07a487-1a41-44de-ac80-4a8f2d26483a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.601380 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ttww\" (UniqueName: \"kubernetes.io/projected/2b07a487-1a41-44de-ac80-4a8f2d26483a-kube-api-access-2ttww\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.601392 4687 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2b07a487-1a41-44de-ac80-4a8f2d26483a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.601401 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.601413 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5bzt\" (UniqueName: 
\"kubernetes.io/projected/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-kube-api-access-q5bzt\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.601424 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.601434 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b07a487-1a41-44de-ac80-4a8f2d26483a-logs\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.603662 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e1edd4e-b3e9-40ca-8cb1-86380336a2db" (UID: "4e1edd4e-b3e9-40ca-8cb1-86380336a2db"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.606739 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e1edd4e-b3e9-40ca-8cb1-86380336a2db" (UID: "4e1edd4e-b3e9-40ca-8cb1-86380336a2db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.622585 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4e1edd4e-b3e9-40ca-8cb1-86380336a2db" (UID: "4e1edd4e-b3e9-40ca-8cb1-86380336a2db"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.669726 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6596889657-28n98"] Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.676257 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6596889657-28n98"] Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.703091 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.703146 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.703163 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e1edd4e-b3e9-40ca-8cb1-86380336a2db-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.892882 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-w94zw"] Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.901799 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-w94zw"] Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.983640 4687 scope.go:117] "RemoveContainer" containerID="3538985e90b2abd590e86e26cf646b003bd893e2467a8de7ce58eb1abaf1a7fe" Dec 03 17:59:36 crc kubenswrapper[4687]: I1203 17:59:36.999025 4687 scope.go:117] "RemoveContainer" containerID="f25ee889695438f95bd0958bb5c5ffd30cf7cde3ad8173c96ac248f9b372b769" Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.172848 4687 scope.go:117] "RemoveContainer" 
containerID="13b2ea95ad88f7f4509ff4daf1b6684e49058fcfb3ed02f032aae4b55fd66120" Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.420465 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b07a487-1a41-44de-ac80-4a8f2d26483a" path="/var/lib/kubelet/pods/2b07a487-1a41-44de-ac80-4a8f2d26483a/volumes" Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.421146 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" path="/var/lib/kubelet/pods/4e1edd4e-b3e9-40ca-8cb1-86380336a2db/volumes" Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.529867 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-6tmxl"] Dec 03 17:59:37 crc kubenswrapper[4687]: W1203 17:59:37.543745 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23273387_49bc_4a7e_b07a_5695d947eda9.slice/crio-9fe3ed71430fb6d4e70121cf000ade00e7998ff14e7ab4c4cf61263160b49c0b WatchSource:0}: Error finding container 9fe3ed71430fb6d4e70121cf000ade00e7998ff14e7ab4c4cf61263160b49c0b: Status 404 returned error can't find the container with id 9fe3ed71430fb6d4e70121cf000ade00e7998ff14e7ab4c4cf61263160b49c0b Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.559050 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.613455 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6tmxl" event={"ID":"23273387-49bc-4a7e-b07a-5695d947eda9","Type":"ContainerStarted","Data":"9fe3ed71430fb6d4e70121cf000ade00e7998ff14e7ab4c4cf61263160b49c0b"} Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.623424 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6968cc7b7b-57qh6" 
event={"ID":"b08dc684-ab9f-41db-a259-2d06b757f3cf","Type":"ContainerStarted","Data":"b50de0134cdb6b29dccb9a1bed8d9e4437fe0ed976f8afdaac49a0db416299d7"} Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.628575 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"caf2fb27-488f-4976-8cf9-082b37eb90d0","Type":"ContainerStarted","Data":"82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109"} Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.628767 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerName="glance-log" containerID="cri-o://d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7" gracePeriod=30 Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.629032 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerName="glance-httpd" containerID="cri-o://82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109" gracePeriod=30 Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.644423 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m67k4" event={"ID":"a59cd24e-e105-48b8-a084-909b0dca97c0","Type":"ContainerStarted","Data":"3ac4ddadf524375d31ce336a55d2747fa18a9eddd54fe44fe7ba1304ed0a1919"} Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.670678 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.670657322 podStartE2EDuration="31.670657322s" podCreationTimestamp="2025-12-03 17:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:37.663102498 +0000 UTC m=+1210.553798021" 
watchObservedRunningTime="2025-12-03 17:59:37.670657322 +0000 UTC m=+1210.561352755" Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.747107 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-m67k4" podStartSLOduration=3.7619183769999998 podStartE2EDuration="32.747084468s" podCreationTimestamp="2025-12-03 17:59:05 +0000 UTC" firstStartedPulling="2025-12-03 17:59:07.27310929 +0000 UTC m=+1180.163804723" lastFinishedPulling="2025-12-03 17:59:36.258275381 +0000 UTC m=+1209.148970814" observedRunningTime="2025-12-03 17:59:37.686421828 +0000 UTC m=+1210.577117261" watchObservedRunningTime="2025-12-03 17:59:37.747084468 +0000 UTC m=+1210.637779901" Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.756416 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crlv7"] Dec 03 17:59:37 crc kubenswrapper[4687]: I1203 17:59:37.862880 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66484d5554-njnbk"] Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.275211 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d44b68cb5-gzqxl"] Dec 03 17:59:38 crc kubenswrapper[4687]: E1203 17:59:38.276176 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="dnsmasq-dns" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.276285 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="dnsmasq-dns" Dec 03 17:59:38 crc kubenswrapper[4687]: E1203 17:59:38.276373 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="init" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.276437 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="init" Dec 03 17:59:38 crc kubenswrapper[4687]: 
I1203 17:59:38.276704 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1edd4e-b3e9-40ca-8cb1-86380336a2db" containerName="dnsmasq-dns" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.278036 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.286063 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.286450 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.318866 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d44b68cb5-gzqxl"] Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.362352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-combined-ca-bundle\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.362465 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-ovndb-tls-certs\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.362494 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-config\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " 
pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.362530 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-public-tls-certs\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.362553 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-internal-tls-certs\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.362576 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-httpd-config\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.362646 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgg82\" (UniqueName: \"kubernetes.io/projected/120144a6-19ba-4119-9ef7-7c70664c5e0c-kube-api-access-jgg82\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.463843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-ovndb-tls-certs\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " 
pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.464237 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-config\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.464284 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-internal-tls-certs\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.464309 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-public-tls-certs\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.464330 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-httpd-config\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.464398 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgg82\" (UniqueName: \"kubernetes.io/projected/120144a6-19ba-4119-9ef7-7c70664c5e0c-kube-api-access-jgg82\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 
17:59:38.464488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-combined-ca-bundle\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.470881 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-internal-tls-certs\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.473573 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-httpd-config\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.473765 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-combined-ca-bundle\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.482951 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-public-tls-certs\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.483327 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-config\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.484537 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/120144a6-19ba-4119-9ef7-7c70664c5e0c-ovndb-tls-certs\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.487318 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgg82\" (UniqueName: \"kubernetes.io/projected/120144a6-19ba-4119-9ef7-7c70664c5e0c-kube-api-access-jgg82\") pod \"neutron-7d44b68cb5-gzqxl\" (UID: \"120144a6-19ba-4119-9ef7-7c70664c5e0c\") " pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.502336 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.619563 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.643743 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.684377 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6tmxl" event={"ID":"23273387-49bc-4a7e-b07a-5695d947eda9","Type":"ContainerStarted","Data":"0baf87078618cb287341d80a79f9ef3afd310e55c86d2b50d7d1c1e383aa87d6"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.689354 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee43441f-77ef-4fd7-a326-b173070a6060","Type":"ContainerStarted","Data":"590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.696188 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6968cc7b7b-57qh6" event={"ID":"b08dc684-ab9f-41db-a259-2d06b757f3cf","Type":"ContainerStarted","Data":"8c3efe7e883ec3aef22b369db4044297e67afb20e7abb5fa956697534f1930e2"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.696225 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6968cc7b7b-57qh6" event={"ID":"b08dc684-ab9f-41db-a259-2d06b757f3cf","Type":"ContainerStarted","Data":"47910bc826dc71b7efc15efad1035f742cd15bb87cc2689ee16bbbe8f74da4e4"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.700900 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58975c669d-5qj7w" event={"ID":"2559a1aa-62c1-43b3-9183-66ebe4d8efc9","Type":"ContainerStarted","Data":"4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.700966 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58975c669d-5qj7w" event={"ID":"2559a1aa-62c1-43b3-9183-66ebe4d8efc9","Type":"ContainerStarted","Data":"e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.719215 4687 
generic.go:334] "Generic (PLEG): container finished" podID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerID="82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109" exitCode=0 Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.719244 4687 generic.go:334] "Generic (PLEG): container finished" podID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerID="d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7" exitCode=143 Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.719314 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.719320 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"caf2fb27-488f-4976-8cf9-082b37eb90d0","Type":"ContainerDied","Data":"82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.719348 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"caf2fb27-488f-4976-8cf9-082b37eb90d0","Type":"ContainerDied","Data":"d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.719358 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"caf2fb27-488f-4976-8cf9-082b37eb90d0","Type":"ContainerDied","Data":"63678f0143317a0e31f3351391d9b75226f3060b094983962258111a47b69d6c"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.719375 4687 scope.go:117] "RemoveContainer" containerID="82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.721908 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-6tmxl" podStartSLOduration=13.72189589 podStartE2EDuration="13.72189589s" 
podCreationTimestamp="2025-12-03 17:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:38.715707282 +0000 UTC m=+1211.606402715" watchObservedRunningTime="2025-12-03 17:59:38.72189589 +0000 UTC m=+1211.612591323" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.732235 4687 generic.go:334] "Generic (PLEG): container finished" podID="8d6ba5c5-8f42-4aca-8548-f385332049ed" containerID="7ecf5c84f3a86bf875189da02f5c815e93ccc29e6adb948808219780e2e7990f" exitCode=0 Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.732319 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" event={"ID":"8d6ba5c5-8f42-4aca-8548-f385332049ed","Type":"ContainerDied","Data":"7ecf5c84f3a86bf875189da02f5c815e93ccc29e6adb948808219780e2e7990f"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.732350 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" event={"ID":"8d6ba5c5-8f42-4aca-8548-f385332049ed","Type":"ContainerStarted","Data":"9a0b2ce74461dd095ca29f7d74176b09c4eb3f05ce27e283cec3f829381f3bd2"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.757615 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-58975c669d-5qj7w" podStartSLOduration=23.639229362000002 podStartE2EDuration="24.757595965s" podCreationTimestamp="2025-12-03 17:59:14 +0000 UTC" firstStartedPulling="2025-12-03 17:59:36.251452236 +0000 UTC m=+1209.142147679" lastFinishedPulling="2025-12-03 17:59:37.369818849 +0000 UTC m=+1210.260514282" observedRunningTime="2025-12-03 17:59:38.754217324 +0000 UTC m=+1211.644912767" watchObservedRunningTime="2025-12-03 17:59:38.757595965 +0000 UTC m=+1211.648291398" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.758805 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66484d5554-njnbk" 
event={"ID":"058e41aa-d6d6-43a8-a98a-3ba0433acbd5","Type":"ContainerStarted","Data":"003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.758884 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66484d5554-njnbk" event={"ID":"058e41aa-d6d6-43a8-a98a-3ba0433acbd5","Type":"ContainerStarted","Data":"7fbba7c8e87d11e91aef35e41870b832abec4290ca3e2e71dc8ceac52b328284"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.763645 4687 scope.go:117] "RemoveContainer" containerID="d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.766734 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b1b60fd3-9d07-4696-8ccf-540ce446eb7b","Type":"ContainerStarted","Data":"c95cc206bdc64ca5ea050ed1dcfcf393504ff24de192f5917a92d6c0cf75ae67"} Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.769729 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-logs\") pod \"caf2fb27-488f-4976-8cf9-082b37eb90d0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.769813 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-internal-tls-certs\") pod \"caf2fb27-488f-4976-8cf9-082b37eb90d0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.769956 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-httpd-run\") pod \"caf2fb27-488f-4976-8cf9-082b37eb90d0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " Dec 
03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.770022 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-combined-ca-bundle\") pod \"caf2fb27-488f-4976-8cf9-082b37eb90d0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.770045 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvjpq\" (UniqueName: \"kubernetes.io/projected/caf2fb27-488f-4976-8cf9-082b37eb90d0-kube-api-access-lvjpq\") pod \"caf2fb27-488f-4976-8cf9-082b37eb90d0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.770170 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-scripts\") pod \"caf2fb27-488f-4976-8cf9-082b37eb90d0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.770202 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"caf2fb27-488f-4976-8cf9-082b37eb90d0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.770236 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-config-data\") pod \"caf2fb27-488f-4976-8cf9-082b37eb90d0\" (UID: \"caf2fb27-488f-4976-8cf9-082b37eb90d0\") " Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.771996 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-logs" (OuterVolumeSpecName: "logs") pod 
"caf2fb27-488f-4976-8cf9-082b37eb90d0" (UID: "caf2fb27-488f-4976-8cf9-082b37eb90d0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.774396 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "caf2fb27-488f-4976-8cf9-082b37eb90d0" (UID: "caf2fb27-488f-4976-8cf9-082b37eb90d0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.781746 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caf2fb27-488f-4976-8cf9-082b37eb90d0-kube-api-access-lvjpq" (OuterVolumeSpecName: "kube-api-access-lvjpq") pod "caf2fb27-488f-4976-8cf9-082b37eb90d0" (UID: "caf2fb27-488f-4976-8cf9-082b37eb90d0"). InnerVolumeSpecName "kube-api-access-lvjpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.782167 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-scripts" (OuterVolumeSpecName: "scripts") pod "caf2fb27-488f-4976-8cf9-082b37eb90d0" (UID: "caf2fb27-488f-4976-8cf9-082b37eb90d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.782875 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "caf2fb27-488f-4976-8cf9-082b37eb90d0" (UID: "caf2fb27-488f-4976-8cf9-082b37eb90d0"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.793433 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6968cc7b7b-57qh6" podStartSLOduration=24.206784335 podStartE2EDuration="24.793412053s" podCreationTimestamp="2025-12-03 17:59:14 +0000 UTC" firstStartedPulling="2025-12-03 17:59:36.927485282 +0000 UTC m=+1209.818180715" lastFinishedPulling="2025-12-03 17:59:37.514113 +0000 UTC m=+1210.404808433" observedRunningTime="2025-12-03 17:59:38.780580677 +0000 UTC m=+1211.671276100" watchObservedRunningTime="2025-12-03 17:59:38.793412053 +0000 UTC m=+1211.684107486" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.872325 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-logs\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.874105 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/caf2fb27-488f-4976-8cf9-082b37eb90d0-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.874167 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvjpq\" (UniqueName: \"kubernetes.io/projected/caf2fb27-488f-4976-8cf9-082b37eb90d0-kube-api-access-lvjpq\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.874183 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.874207 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 03 17:59:38 crc 
kubenswrapper[4687]: I1203 17:59:38.876457 4687 scope.go:117] "RemoveContainer" containerID="82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109" Dec 03 17:59:38 crc kubenswrapper[4687]: E1203 17:59:38.886134 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109\": container with ID starting with 82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109 not found: ID does not exist" containerID="82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.886184 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109"} err="failed to get container status \"82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109\": rpc error: code = NotFound desc = could not find container \"82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109\": container with ID starting with 82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109 not found: ID does not exist" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.886216 4687 scope.go:117] "RemoveContainer" containerID="d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7" Dec 03 17:59:38 crc kubenswrapper[4687]: E1203 17:59:38.890249 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7\": container with ID starting with d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7 not found: ID does not exist" containerID="d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.890269 4687 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7"} err="failed to get container status \"d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7\": rpc error: code = NotFound desc = could not find container \"d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7\": container with ID starting with d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7 not found: ID does not exist" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.890284 4687 scope.go:117] "RemoveContainer" containerID="82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.890609 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109"} err="failed to get container status \"82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109\": rpc error: code = NotFound desc = could not find container \"82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109\": container with ID starting with 82ff7dd65c6fb02d0a397722bbedfd7568e6f31da7e04385be64c7b4eb6b8109 not found: ID does not exist" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.890651 4687 scope.go:117] "RemoveContainer" containerID="d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.891054 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7"} err="failed to get container status \"d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7\": rpc error: code = NotFound desc = could not find container \"d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7\": container with ID starting with d758e9b1823573af8c0187360f62076c7230e69a878e7bf2c62c0bc7cbdf07e7 not found: ID does not 
exist" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.897943 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "caf2fb27-488f-4976-8cf9-082b37eb90d0" (UID: "caf2fb27-488f-4976-8cf9-082b37eb90d0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.923337 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.963265 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "caf2fb27-488f-4976-8cf9-082b37eb90d0" (UID: "caf2fb27-488f-4976-8cf9-082b37eb90d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.975070 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-config-data" (OuterVolumeSpecName: "config-data") pod "caf2fb27-488f-4976-8cf9-082b37eb90d0" (UID: "caf2fb27-488f-4976-8cf9-082b37eb90d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.977629 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.977855 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.977957 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caf2fb27-488f-4976-8cf9-082b37eb90d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:38 crc kubenswrapper[4687]: I1203 17:59:38.978059 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.107188 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.141910 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.168200 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 17:59:39 crc kubenswrapper[4687]: E1203 17:59:39.168743 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerName="glance-log" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.168758 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerName="glance-log" Dec 03 17:59:39 
crc kubenswrapper[4687]: E1203 17:59:39.168815 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerName="glance-httpd" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.168824 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerName="glance-httpd" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.169819 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerName="glance-log" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.169852 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="caf2fb27-488f-4976-8cf9-082b37eb90d0" containerName="glance-httpd" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.171321 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.175184 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.175427 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.189859 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.284665 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.284712 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.284741 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.284761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.284792 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.284807 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwbxc\" (UniqueName: \"kubernetes.io/projected/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-kube-api-access-nwbxc\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.284821 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.284852 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.388389 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.388444 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.388473 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.388494 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.388522 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.388537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwbxc\" (UniqueName: \"kubernetes.io/projected/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-kube-api-access-nwbxc\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.388556 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.388590 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.389062 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.392376 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.392636 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-logs\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.399047 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.417474 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.422944 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.426104 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwbxc\" (UniqueName: \"kubernetes.io/projected/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-kube-api-access-nwbxc\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.432249 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.433552 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caf2fb27-488f-4976-8cf9-082b37eb90d0" path="/var/lib/kubelet/pods/caf2fb27-488f-4976-8cf9-082b37eb90d0/volumes" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.441527 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.459904 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d44b68cb5-gzqxl"] Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.608635 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.841113 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" event={"ID":"8d6ba5c5-8f42-4aca-8548-f385332049ed","Type":"ContainerStarted","Data":"185938becf33e1c70a4682ecdf043839ee9457e50a8e6476267693b077ea2043"} Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.842193 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.853388 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d44b68cb5-gzqxl" event={"ID":"120144a6-19ba-4119-9ef7-7c70664c5e0c","Type":"ContainerStarted","Data":"2a25b0ff32ae6fc81075b7192298ce037918f3265c181ab5446df9e67edaf2e5"} Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.866860 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66484d5554-njnbk" event={"ID":"058e41aa-d6d6-43a8-a98a-3ba0433acbd5","Type":"ContainerStarted","Data":"ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b"} Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.867022 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66484d5554-njnbk" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.899303 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66484d5554-njnbk" podStartSLOduration=4.899286839 podStartE2EDuration="4.899286839s" podCreationTimestamp="2025-12-03 17:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:39.898349083 +0000 UTC m=+1212.789044506" watchObservedRunningTime="2025-12-03 17:59:39.899286839 +0000 UTC m=+1212.789982272" Dec 03 17:59:39 crc kubenswrapper[4687]: I1203 17:59:39.901155 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" podStartSLOduration=4.901150009 podStartE2EDuration="4.901150009s" podCreationTimestamp="2025-12-03 17:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:39.87494463 +0000 UTC m=+1212.765640063" watchObservedRunningTime="2025-12-03 17:59:39.901150009 +0000 UTC m=+1212.791845442" Dec 03 17:59:40 crc kubenswrapper[4687]: I1203 17:59:40.478558 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 17:59:40 crc kubenswrapper[4687]: I1203 17:59:40.939390 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d44b68cb5-gzqxl" event={"ID":"120144a6-19ba-4119-9ef7-7c70664c5e0c","Type":"ContainerStarted","Data":"b8352c32ca2a6d87d1a7415be89d4ea44470522ac022bd40ed91ca204ee375ec"} Dec 03 17:59:40 crc kubenswrapper[4687]: I1203 17:59:40.939439 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d44b68cb5-gzqxl" event={"ID":"120144a6-19ba-4119-9ef7-7c70664c5e0c","Type":"ContainerStarted","Data":"5df10a82456fbf7a7c3509e9b7170e82b54749078431265aeec023360d1c3dff"} Dec 03 17:59:40 crc kubenswrapper[4687]: I1203 17:59:40.939680 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d44b68cb5-gzqxl" Dec 03 17:59:40 crc kubenswrapper[4687]: I1203 17:59:40.942616 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b1b60fd3-9d07-4696-8ccf-540ce446eb7b","Type":"ContainerStarted","Data":"b8b8e9567bb8052c7af385d033a0bd56025ab703e11a800592b0aa5a4ea127c3"} Dec 03 17:59:40 crc kubenswrapper[4687]: I1203 17:59:40.947186 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e","Type":"ContainerStarted","Data":"85f06f6d7cd50cfabfc1e2248813b39313dfa93ff7ad39b11a69224f21da520e"} Dec 03 17:59:40 crc kubenswrapper[4687]: I1203 17:59:40.959979 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d44b68cb5-gzqxl" podStartSLOduration=2.959962861 podStartE2EDuration="2.959962861s" podCreationTimestamp="2025-12-03 17:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:40.957094094 +0000 UTC m=+1213.847789527" watchObservedRunningTime="2025-12-03 17:59:40.959962861 +0000 UTC m=+1213.850658284" Dec 03 17:59:41 crc kubenswrapper[4687]: I1203 17:59:41.963636 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b1b60fd3-9d07-4696-8ccf-540ce446eb7b","Type":"ContainerStarted","Data":"438523e3d2999130dea41de7ac0d605343b6151204268e021bf10fa5e804885a"} Dec 03 17:59:41 crc kubenswrapper[4687]: I1203 17:59:41.968834 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e","Type":"ContainerStarted","Data":"c9b6710490130851c7c0c4cd38651ab01ce2c4618dc004e0ec2c0ec17b932425"} Dec 03 17:59:41 crc kubenswrapper[4687]: I1203 17:59:41.974990 4687 generic.go:334] "Generic (PLEG): container finished" podID="a59cd24e-e105-48b8-a084-909b0dca97c0" containerID="3ac4ddadf524375d31ce336a55d2747fa18a9eddd54fe44fe7ba1304ed0a1919" exitCode=0 Dec 03 17:59:41 crc kubenswrapper[4687]: I1203 17:59:41.975916 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m67k4" event={"ID":"a59cd24e-e105-48b8-a084-909b0dca97c0","Type":"ContainerDied","Data":"3ac4ddadf524375d31ce336a55d2747fa18a9eddd54fe44fe7ba1304ed0a1919"} Dec 03 17:59:41 crc kubenswrapper[4687]: I1203 17:59:41.999085 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.999059291 podStartE2EDuration="7.999059291s" podCreationTimestamp="2025-12-03 17:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:41.990194472 +0000 UTC m=+1214.880889925" watchObservedRunningTime="2025-12-03 17:59:41.999059291 +0000 UTC m=+1214.889754724" Dec 03 17:59:42 crc kubenswrapper[4687]: I1203 17:59:42.985361 4687 generic.go:334] "Generic (PLEG): container finished" podID="23273387-49bc-4a7e-b07a-5695d947eda9" containerID="0baf87078618cb287341d80a79f9ef3afd310e55c86d2b50d7d1c1e383aa87d6" exitCode=0 Dec 03 17:59:42 crc kubenswrapper[4687]: I1203 17:59:42.985430 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6tmxl" event={"ID":"23273387-49bc-4a7e-b07a-5695d947eda9","Type":"ContainerDied","Data":"0baf87078618cb287341d80a79f9ef3afd310e55c86d2b50d7d1c1e383aa87d6"} Dec 03 17:59:42 crc kubenswrapper[4687]: I1203 17:59:42.989160 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2flgf" event={"ID":"f34993b1-3135-46ef-9f85-9ab7525b1682","Type":"ContainerStarted","Data":"bcdd0f4ca1412d82ddf81ab61553982e5cefa09d6e8092776bc27707f3de2715"} Dec 03 17:59:42 crc kubenswrapper[4687]: I1203 17:59:42.992404 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e","Type":"ContainerStarted","Data":"be7d3f13d113d001caffcacb29157eb5808f0aa43792c298dc3540709114c41f"} Dec 03 17:59:43 crc kubenswrapper[4687]: I1203 17:59:43.016295 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2flgf" podStartSLOduration=3.813791171 podStartE2EDuration="38.01627845s" podCreationTimestamp="2025-12-03 17:59:05 +0000 UTC" 
firstStartedPulling="2025-12-03 17:59:07.273050619 +0000 UTC m=+1180.163746062" lastFinishedPulling="2025-12-03 17:59:41.475537908 +0000 UTC m=+1214.366233341" observedRunningTime="2025-12-03 17:59:43.015311223 +0000 UTC m=+1215.906006646" watchObservedRunningTime="2025-12-03 17:59:43.01627845 +0000 UTC m=+1215.906973883" Dec 03 17:59:43 crc kubenswrapper[4687]: I1203 17:59:43.037218 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.037202575 podStartE2EDuration="4.037202575s" podCreationTimestamp="2025-12-03 17:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:43.034573494 +0000 UTC m=+1215.925268957" watchObservedRunningTime="2025-12-03 17:59:43.037202575 +0000 UTC m=+1215.927898008" Dec 03 17:59:44 crc kubenswrapper[4687]: I1203 17:59:44.395796 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-58975c669d-5qj7w" Dec 03 17:59:44 crc kubenswrapper[4687]: I1203 17:59:44.396270 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58975c669d-5qj7w" Dec 03 17:59:44 crc kubenswrapper[4687]: I1203 17:59:44.484518 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6968cc7b7b-57qh6" Dec 03 17:59:44 crc kubenswrapper[4687]: I1203 17:59:44.485279 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6968cc7b7b-57qh6" Dec 03 17:59:44 crc kubenswrapper[4687]: I1203 17:59:44.977918 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 17:59:44 crc kubenswrapper[4687]: I1203 17:59:44.977969 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 17:59:45 crc 
kubenswrapper[4687]: I1203 17:59:45.033100 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.034073 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.040768 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.693235 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m67k4" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.822240 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-647ct\" (UniqueName: \"kubernetes.io/projected/a59cd24e-e105-48b8-a084-909b0dca97c0-kube-api-access-647ct\") pod \"a59cd24e-e105-48b8-a084-909b0dca97c0\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.822311 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-combined-ca-bundle\") pod \"a59cd24e-e105-48b8-a084-909b0dca97c0\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.822348 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-scripts\") pod \"a59cd24e-e105-48b8-a084-909b0dca97c0\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.822398 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a59cd24e-e105-48b8-a084-909b0dca97c0-logs\") pod \"a59cd24e-e105-48b8-a084-909b0dca97c0\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.822796 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59cd24e-e105-48b8-a084-909b0dca97c0-logs" (OuterVolumeSpecName: "logs") pod "a59cd24e-e105-48b8-a084-909b0dca97c0" (UID: "a59cd24e-e105-48b8-a084-909b0dca97c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.822890 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-config-data\") pod \"a59cd24e-e105-48b8-a084-909b0dca97c0\" (UID: \"a59cd24e-e105-48b8-a084-909b0dca97c0\") " Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.823548 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a59cd24e-e105-48b8-a084-909b0dca97c0-logs\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.831297 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59cd24e-e105-48b8-a084-909b0dca97c0-kube-api-access-647ct" (OuterVolumeSpecName: "kube-api-access-647ct") pod "a59cd24e-e105-48b8-a084-909b0dca97c0" (UID: "a59cd24e-e105-48b8-a084-909b0dca97c0"). InnerVolumeSpecName "kube-api-access-647ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.841995 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-scripts" (OuterVolumeSpecName: "scripts") pod "a59cd24e-e105-48b8-a084-909b0dca97c0" (UID: "a59cd24e-e105-48b8-a084-909b0dca97c0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.852305 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a59cd24e-e105-48b8-a084-909b0dca97c0" (UID: "a59cd24e-e105-48b8-a084-909b0dca97c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.867687 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-config-data" (OuterVolumeSpecName: "config-data") pod "a59cd24e-e105-48b8-a084-909b0dca97c0" (UID: "a59cd24e-e105-48b8-a084-909b0dca97c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.879410 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.925322 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-647ct\" (UniqueName: \"kubernetes.io/projected/a59cd24e-e105-48b8-a084-909b0dca97c0-kube-api-access-647ct\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.925355 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.925365 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.925373 4687 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a59cd24e-e105-48b8-a084-909b0dca97c0-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.957326 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-bqs26"] Dec 03 17:59:45 crc kubenswrapper[4687]: I1203 17:59:45.957554 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" podUID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" containerName="dnsmasq-dns" containerID="cri-o://b88f7a8660a7ad950ee492fd13e324a083575b5e87405bacaf0a1829f2d97bba" gracePeriod=10 Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.030499 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m67k4" event={"ID":"a59cd24e-e105-48b8-a084-909b0dca97c0","Type":"ContainerDied","Data":"056e8c944fc4ecb0bbee8e2995299c9caa860df7767db90c6b53c52caa8122ad"} Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.030780 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="056e8c944fc4ecb0bbee8e2995299c9caa860df7767db90c6b53c52caa8122ad" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.031548 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m67k4" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.032835 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.432721 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" podUID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.876547 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-699567968b-hhzfv"] Dec 03 17:59:46 crc kubenswrapper[4687]: E1203 17:59:46.876907 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59cd24e-e105-48b8-a084-909b0dca97c0" containerName="placement-db-sync" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.876917 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59cd24e-e105-48b8-a084-909b0dca97c0" containerName="placement-db-sync" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.877097 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59cd24e-e105-48b8-a084-909b0dca97c0" containerName="placement-db-sync" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.877960 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.882176 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.882180 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.882262 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.884336 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-46dxr" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.884632 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.893336 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-699567968b-hhzfv"] Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.941909 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-scripts\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.941974 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66dbaeab-7905-40ae-9e1e-3674573a1aa3-logs\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.941993 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-combined-ca-bundle\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.942056 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-config-data\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.942192 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5r9\" (UniqueName: \"kubernetes.io/projected/66dbaeab-7905-40ae-9e1e-3674573a1aa3-kube-api-access-mj5r9\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.942324 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-internal-tls-certs\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:46 crc kubenswrapper[4687]: I1203 17:59:46.942388 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-public-tls-certs\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.041697 4687 generic.go:334] "Generic (PLEG): 
container finished" podID="f34993b1-3135-46ef-9f85-9ab7525b1682" containerID="bcdd0f4ca1412d82ddf81ab61553982e5cefa09d6e8092776bc27707f3de2715" exitCode=0 Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.041753 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2flgf" event={"ID":"f34993b1-3135-46ef-9f85-9ab7525b1682","Type":"ContainerDied","Data":"bcdd0f4ca1412d82ddf81ab61553982e5cefa09d6e8092776bc27707f3de2715"} Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.043551 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj5r9\" (UniqueName: \"kubernetes.io/projected/66dbaeab-7905-40ae-9e1e-3674573a1aa3-kube-api-access-mj5r9\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.043643 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-internal-tls-certs\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.043685 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-public-tls-certs\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.043720 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-scripts\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " 
pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.043740 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66dbaeab-7905-40ae-9e1e-3674573a1aa3-logs\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.043754 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-combined-ca-bundle\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.043781 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-config-data\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.048654 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/66dbaeab-7905-40ae-9e1e-3674573a1aa3-logs\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.049135 4687 generic.go:334] "Generic (PLEG): container finished" podID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" containerID="b88f7a8660a7ad950ee492fd13e324a083575b5e87405bacaf0a1829f2d97bba" exitCode=0 Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.049250 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-internal-tls-certs\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.049257 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" event={"ID":"1cae5f8b-6d0e-4f66-867a-7d7288528ce4","Type":"ContainerDied","Data":"b88f7a8660a7ad950ee492fd13e324a083575b5e87405bacaf0a1829f2d97bba"} Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.060828 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-combined-ca-bundle\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.061988 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-scripts\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.066626 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-public-tls-certs\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.068529 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66dbaeab-7905-40ae-9e1e-3674573a1aa3-config-data\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " 
pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.093735 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj5r9\" (UniqueName: \"kubernetes.io/projected/66dbaeab-7905-40ae-9e1e-3674573a1aa3-kube-api-access-mj5r9\") pod \"placement-699567968b-hhzfv\" (UID: \"66dbaeab-7905-40ae-9e1e-3674573a1aa3\") " pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:47 crc kubenswrapper[4687]: I1203 17:59:47.198581 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:48 crc kubenswrapper[4687]: I1203 17:59:48.407878 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 17:59:48 crc kubenswrapper[4687]: I1203 17:59:48.408508 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 17:59:48 crc kubenswrapper[4687]: I1203 17:59:48.824887 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.271889 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.286459 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2flgf" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.386408 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-db-sync-config-data\") pod \"f34993b1-3135-46ef-9f85-9ab7525b1682\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.386496 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsh9p\" (UniqueName: \"kubernetes.io/projected/f34993b1-3135-46ef-9f85-9ab7525b1682-kube-api-access-nsh9p\") pod \"f34993b1-3135-46ef-9f85-9ab7525b1682\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.386583 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-combined-ca-bundle\") pod \"f34993b1-3135-46ef-9f85-9ab7525b1682\" (UID: \"f34993b1-3135-46ef-9f85-9ab7525b1682\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.386616 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-config-data\") pod \"23273387-49bc-4a7e-b07a-5695d947eda9\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.386638 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-combined-ca-bundle\") pod \"23273387-49bc-4a7e-b07a-5695d947eda9\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.386682 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-scripts\") pod \"23273387-49bc-4a7e-b07a-5695d947eda9\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.386739 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-fernet-keys\") pod \"23273387-49bc-4a7e-b07a-5695d947eda9\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.386767 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq6s2\" (UniqueName: \"kubernetes.io/projected/23273387-49bc-4a7e-b07a-5695d947eda9-kube-api-access-xq6s2\") pod \"23273387-49bc-4a7e-b07a-5695d947eda9\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.386827 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-credential-keys\") pod \"23273387-49bc-4a7e-b07a-5695d947eda9\" (UID: \"23273387-49bc-4a7e-b07a-5695d947eda9\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.393386 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "23273387-49bc-4a7e-b07a-5695d947eda9" (UID: "23273387-49bc-4a7e-b07a-5695d947eda9"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.396013 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f34993b1-3135-46ef-9f85-9ab7525b1682" (UID: "f34993b1-3135-46ef-9f85-9ab7525b1682"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.396390 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34993b1-3135-46ef-9f85-9ab7525b1682-kube-api-access-nsh9p" (OuterVolumeSpecName: "kube-api-access-nsh9p") pod "f34993b1-3135-46ef-9f85-9ab7525b1682" (UID: "f34993b1-3135-46ef-9f85-9ab7525b1682"). InnerVolumeSpecName "kube-api-access-nsh9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.399901 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "23273387-49bc-4a7e-b07a-5695d947eda9" (UID: "23273387-49bc-4a7e-b07a-5695d947eda9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.405237 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-scripts" (OuterVolumeSpecName: "scripts") pod "23273387-49bc-4a7e-b07a-5695d947eda9" (UID: "23273387-49bc-4a7e-b07a-5695d947eda9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.415714 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23273387-49bc-4a7e-b07a-5695d947eda9-kube-api-access-xq6s2" (OuterVolumeSpecName: "kube-api-access-xq6s2") pod "23273387-49bc-4a7e-b07a-5695d947eda9" (UID: "23273387-49bc-4a7e-b07a-5695d947eda9"). InnerVolumeSpecName "kube-api-access-xq6s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.456202 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-config-data" (OuterVolumeSpecName: "config-data") pod "23273387-49bc-4a7e-b07a-5695d947eda9" (UID: "23273387-49bc-4a7e-b07a-5695d947eda9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.471056 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f34993b1-3135-46ef-9f85-9ab7525b1682" (UID: "f34993b1-3135-46ef-9f85-9ab7525b1682"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.489950 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsh9p\" (UniqueName: \"kubernetes.io/projected/f34993b1-3135-46ef-9f85-9ab7525b1682-kube-api-access-nsh9p\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.489983 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.489994 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.490003 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.490012 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.490020 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq6s2\" (UniqueName: \"kubernetes.io/projected/23273387-49bc-4a7e-b07a-5695d947eda9-kube-api-access-xq6s2\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.490028 4687 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.490036 4687 
reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f34993b1-3135-46ef-9f85-9ab7525b1682-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.496118 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23273387-49bc-4a7e-b07a-5695d947eda9" (UID: "23273387-49bc-4a7e-b07a-5695d947eda9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.567535 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.593274 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-swift-storage-0\") pod \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.593359 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-config\") pod \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.593428 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whtjl\" (UniqueName: \"kubernetes.io/projected/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-kube-api-access-whtjl\") pod \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.593546 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-sb\") pod \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.593594 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-nb\") pod \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.593624 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-svc\") pod \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\" (UID: \"1cae5f8b-6d0e-4f66-867a-7d7288528ce4\") " Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.594103 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23273387-49bc-4a7e-b07a-5695d947eda9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.609421 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.609482 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.617869 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-kube-api-access-whtjl" (OuterVolumeSpecName: "kube-api-access-whtjl") pod "1cae5f8b-6d0e-4f66-867a-7d7288528ce4" (UID: "1cae5f8b-6d0e-4f66-867a-7d7288528ce4"). 
InnerVolumeSpecName "kube-api-access-whtjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.695359 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whtjl\" (UniqueName: \"kubernetes.io/projected/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-kube-api-access-whtjl\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.731956 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.736676 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.743456 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1cae5f8b-6d0e-4f66-867a-7d7288528ce4" (UID: "1cae5f8b-6d0e-4f66-867a-7d7288528ce4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.761712 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cae5f8b-6d0e-4f66-867a-7d7288528ce4" (UID: "1cae5f8b-6d0e-4f66-867a-7d7288528ce4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.774686 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1cae5f8b-6d0e-4f66-867a-7d7288528ce4" (UID: "1cae5f8b-6d0e-4f66-867a-7d7288528ce4"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.777334 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1cae5f8b-6d0e-4f66-867a-7d7288528ce4" (UID: "1cae5f8b-6d0e-4f66-867a-7d7288528ce4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.783113 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-config" (OuterVolumeSpecName: "config") pod "1cae5f8b-6d0e-4f66-867a-7d7288528ce4" (UID: "1cae5f8b-6d0e-4f66-867a-7d7288528ce4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.796405 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.796432 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.796442 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.796451 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-dns-swift-storage-0\") on 
node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.796460 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cae5f8b-6d0e-4f66-867a-7d7288528ce4-config\") on node \"crc\" DevicePath \"\"" Dec 03 17:59:49 crc kubenswrapper[4687]: W1203 17:59:49.899142 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66dbaeab_7905_40ae_9e1e_3674573a1aa3.slice/crio-b3c8826b7b4ded1d8a91463990292338979bd777544b8aa1da0d92a310899a7d WatchSource:0}: Error finding container b3c8826b7b4ded1d8a91463990292338979bd777544b8aa1da0d92a310899a7d: Status 404 returned error can't find the container with id b3c8826b7b4ded1d8a91463990292338979bd777544b8aa1da0d92a310899a7d Dec 03 17:59:49 crc kubenswrapper[4687]: I1203 17:59:49.903523 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-699567968b-hhzfv"] Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.085831 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-6tmxl" event={"ID":"23273387-49bc-4a7e-b07a-5695d947eda9","Type":"ContainerDied","Data":"9fe3ed71430fb6d4e70121cf000ade00e7998ff14e7ab4c4cf61263160b49c0b"} Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.085857 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-6tmxl" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.085874 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fe3ed71430fb6d4e70121cf000ade00e7998ff14e7ab4c4cf61263160b49c0b" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.087618 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2flgf" event={"ID":"f34993b1-3135-46ef-9f85-9ab7525b1682","Type":"ContainerDied","Data":"594189f8e4b7511f56681ffec35e83a4e1e3d570622ce92f366bcd6138ef82a1"} Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.087655 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="594189f8e4b7511f56681ffec35e83a4e1e3d570622ce92f366bcd6138ef82a1" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.087636 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2flgf" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.094306 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699567968b-hhzfv" event={"ID":"66dbaeab-7905-40ae-9e1e-3674573a1aa3","Type":"ContainerStarted","Data":"b6047935574b4fdd53759ab8fe67047ee609fa2706ea2c4d1f2c8658c606e40c"} Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.094378 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699567968b-hhzfv" event={"ID":"66dbaeab-7905-40ae-9e1e-3674573a1aa3","Type":"ContainerStarted","Data":"b3c8826b7b4ded1d8a91463990292338979bd777544b8aa1da0d92a310899a7d"} Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.100976 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" event={"ID":"1cae5f8b-6d0e-4f66-867a-7d7288528ce4","Type":"ContainerDied","Data":"59a32ed139506fc1d0e7a47a4202fde515fc39137ac32a1272c26c7760fb4068"} Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.101024 4687 
scope.go:117] "RemoveContainer" containerID="b88f7a8660a7ad950ee492fd13e324a083575b5e87405bacaf0a1829f2d97bba" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.101155 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-bqs26" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.107273 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee43441f-77ef-4fd7-a326-b173070a6060","Type":"ContainerStarted","Data":"63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47"} Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.107327 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.107491 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.129596 4687 scope.go:117] "RemoveContainer" containerID="6df47922de8f88d0d3cff5b6f18be9a76228f99fe03afd13283baa900f134811" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.137954 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-bqs26"] Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.147541 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-bqs26"] Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.478644 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7fc787b46b-k9z8g"] Dec 03 17:59:50 crc kubenswrapper[4687]: E1203 17:59:50.479338 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" containerName="init" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.479355 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" containerName="init" Dec 
03 17:59:50 crc kubenswrapper[4687]: E1203 17:59:50.479379 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" containerName="dnsmasq-dns" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.479386 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" containerName="dnsmasq-dns" Dec 03 17:59:50 crc kubenswrapper[4687]: E1203 17:59:50.479399 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23273387-49bc-4a7e-b07a-5695d947eda9" containerName="keystone-bootstrap" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.479405 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="23273387-49bc-4a7e-b07a-5695d947eda9" containerName="keystone-bootstrap" Dec 03 17:59:50 crc kubenswrapper[4687]: E1203 17:59:50.479420 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34993b1-3135-46ef-9f85-9ab7525b1682" containerName="barbican-db-sync" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.479426 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34993b1-3135-46ef-9f85-9ab7525b1682" containerName="barbican-db-sync" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.479575 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34993b1-3135-46ef-9f85-9ab7525b1682" containerName="barbican-db-sync" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.479618 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="23273387-49bc-4a7e-b07a-5695d947eda9" containerName="keystone-bootstrap" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.479635 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" containerName="dnsmasq-dns" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.480245 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.486160 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.486294 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.486482 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.486536 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.486692 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ch9hz" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.486770 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.509598 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fc787b46b-k9z8g"] Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.615930 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7dc58d75dc-vk2m4"] Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.617334 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.619342 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-public-tls-certs\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.619421 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q7kx\" (UniqueName: \"kubernetes.io/projected/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-kube-api-access-7q7kx\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.619465 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-fernet-keys\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.619490 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-scripts\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.619539 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-config-data\") pod \"keystone-7fc787b46b-k9z8g\" (UID: 
\"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.619587 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-credential-keys\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.619667 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-internal-tls-certs\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.619732 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-combined-ca-bundle\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.622538 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-sktmm" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.622721 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.628447 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7b984dc754-pn82p"] Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.629735 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 03 
17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.630615 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.638155 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.680803 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b984dc754-pn82p"] Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.728599 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-internal-tls-certs\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.728690 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clx9f\" (UniqueName: \"kubernetes.io/projected/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-kube-api-access-clx9f\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.728738 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-combined-ca-bundle\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.728771 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/68f5675a-1ac6-475a-b0ba-b83e975e838f-config-data-custom\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.760022 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-internal-tls-certs\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761171 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-public-tls-certs\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761259 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-config-data\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761286 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f5675a-1ac6-475a-b0ba-b83e975e838f-config-data\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761323 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7q7kx\" (UniqueName: \"kubernetes.io/projected/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-kube-api-access-7q7kx\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761358 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmz6v\" (UniqueName: \"kubernetes.io/projected/68f5675a-1ac6-475a-b0ba-b83e975e838f-kube-api-access-dmz6v\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761388 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-fernet-keys\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761414 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-scripts\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761445 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-config-data-custom\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761492 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f5675a-1ac6-475a-b0ba-b83e975e838f-combined-ca-bundle\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761531 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-config-data\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761581 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f5675a-1ac6-475a-b0ba-b83e975e838f-logs\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761604 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-credential-keys\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.761629 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-combined-ca-bundle\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 
17:59:50.761798 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-logs\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.770705 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-credential-keys\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.774042 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-config-data\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.774496 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-public-tls-certs\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.774928 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-fernet-keys\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.778498 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-scripts\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.797206 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dc58d75dc-vk2m4"] Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.830855 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q7kx\" (UniqueName: \"kubernetes.io/projected/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-kube-api-access-7q7kx\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.866147 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42536d5c-2479-4f9f-a6ff-d3705bb42b8f-combined-ca-bundle\") pod \"keystone-7fc787b46b-k9z8g\" (UID: \"42536d5c-2479-4f9f-a6ff-d3705bb42b8f\") " pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.873137 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f5675a-1ac6-475a-b0ba-b83e975e838f-config-data-custom\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.873210 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-config-data\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 
17:59:50.873236 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f5675a-1ac6-475a-b0ba-b83e975e838f-config-data\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.873261 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmz6v\" (UniqueName: \"kubernetes.io/projected/68f5675a-1ac6-475a-b0ba-b83e975e838f-kube-api-access-dmz6v\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.873285 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-config-data-custom\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.873309 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f5675a-1ac6-475a-b0ba-b83e975e838f-combined-ca-bundle\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.873342 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f5675a-1ac6-475a-b0ba-b83e975e838f-logs\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " 
pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.873362 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-combined-ca-bundle\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.873404 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-logs\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.873440 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clx9f\" (UniqueName: \"kubernetes.io/projected/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-kube-api-access-clx9f\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.887937 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68f5675a-1ac6-475a-b0ba-b83e975e838f-combined-ca-bundle\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.889236 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68f5675a-1ac6-475a-b0ba-b83e975e838f-logs\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " 
pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.893749 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-logs\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.895137 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68f5675a-1ac6-475a-b0ba-b83e975e838f-config-data\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.897213 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/68f5675a-1ac6-475a-b0ba-b83e975e838f-config-data-custom\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.905270 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-combined-ca-bundle\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.906949 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-config-data-custom\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " 
pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.915058 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmz6v\" (UniqueName: \"kubernetes.io/projected/68f5675a-1ac6-475a-b0ba-b83e975e838f-kube-api-access-dmz6v\") pod \"barbican-keystone-listener-7b984dc754-pn82p\" (UID: \"68f5675a-1ac6-475a-b0ba-b83e975e838f\") " pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.921358 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-config-data\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.951822 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clx9f\" (UniqueName: \"kubernetes.io/projected/6f8ac0e6-dadf-44e8-8e92-56c306da2a8e-kube-api-access-clx9f\") pod \"barbican-worker-7dc58d75dc-vk2m4\" (UID: \"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e\") " pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.952287 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7dc58d75dc-vk2m4" Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.974947 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-w5ps9"] Dec 03 17:59:50 crc kubenswrapper[4687]: I1203 17:59:50.976608 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.009805 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-w5ps9"] Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.024763 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.086296 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-svc\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.086424 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srqd5\" (UniqueName: \"kubernetes.io/projected/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-kube-api-access-srqd5\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.086479 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.086536 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: 
\"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.086584 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.086614 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-config\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.099644 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.151293 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b9f7f69fd-rlx5z"] Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.177683 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b9f7f69fd-rlx5z"] Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.177805 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.180385 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.192631 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.192704 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.192738 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-config\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.192786 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-svc\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.192857 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srqd5\" (UniqueName: 
\"kubernetes.io/projected/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-kube-api-access-srqd5\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.192890 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.193864 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-svc\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.193883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.193930 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.194659 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-config\") pod 
\"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.195229 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.223384 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-699567968b-hhzfv" event={"ID":"66dbaeab-7905-40ae-9e1e-3674573a1aa3","Type":"ContainerStarted","Data":"84f87af091a9ed7468d2bccbccf75163e603802395fe41412c9dc6f05034547f"} Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.223775 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.223792 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-699567968b-hhzfv" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.245484 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srqd5\" (UniqueName: \"kubernetes.io/projected/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-kube-api-access-srqd5\") pod \"dnsmasq-dns-85ff748b95-w5ps9\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.264654 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-699567968b-hhzfv" podStartSLOduration=5.264628515 podStartE2EDuration="5.264628515s" podCreationTimestamp="2025-12-03 17:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 17:59:51.258849389 +0000 UTC m=+1224.149544822" watchObservedRunningTime="2025-12-03 17:59:51.264628515 +0000 UTC m=+1224.155323948" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.295138 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.295304 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-combined-ca-bundle\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.295344 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55pm\" (UniqueName: \"kubernetes.io/projected/4ee7b1f1-9288-49f2-948f-4635d6676e64-kube-api-access-z55pm\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.295400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ee7b1f1-9288-49f2-948f-4635d6676e64-logs\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.295420 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data-custom\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.338994 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.399560 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-combined-ca-bundle\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.399618 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z55pm\" (UniqueName: \"kubernetes.io/projected/4ee7b1f1-9288-49f2-948f-4635d6676e64-kube-api-access-z55pm\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.399698 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ee7b1f1-9288-49f2-948f-4635d6676e64-logs\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.399727 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data-custom\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: 
I1203 17:59:51.399788 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.405399 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ee7b1f1-9288-49f2-948f-4635d6676e64-logs\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.408475 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.412740 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-combined-ca-bundle\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.418070 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data-custom\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.446792 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z55pm\" (UniqueName: \"kubernetes.io/projected/4ee7b1f1-9288-49f2-948f-4635d6676e64-kube-api-access-z55pm\") pod \"barbican-api-b9f7f69fd-rlx5z\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.463041 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cae5f8b-6d0e-4f66-867a-7d7288528ce4" path="/var/lib/kubelet/pods/1cae5f8b-6d0e-4f66-867a-7d7288528ce4/volumes" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.526214 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.730604 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dc58d75dc-vk2m4"] Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.825019 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7b984dc754-pn82p"] Dec 03 17:59:51 crc kubenswrapper[4687]: I1203 17:59:51.958787 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7fc787b46b-k9z8g"] Dec 03 17:59:52 crc kubenswrapper[4687]: I1203 17:59:52.055932 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-w5ps9"] Dec 03 17:59:52 crc kubenswrapper[4687]: W1203 17:59:52.102571 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a9d3b38_1c8e_4946_a25d_22d8428ee1c5.slice/crio-905051cc2ca3cfd50c50aa5f820d0b0d495e5a61c0a199a4cba92b8fb6ebc069 WatchSource:0}: Error finding container 905051cc2ca3cfd50c50aa5f820d0b0d495e5a61c0a199a4cba92b8fb6ebc069: Status 404 returned error can't find the container with id 905051cc2ca3cfd50c50aa5f820d0b0d495e5a61c0a199a4cba92b8fb6ebc069 Dec 03 17:59:52 crc kubenswrapper[4687]: I1203 17:59:52.134327 4687 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b9f7f69fd-rlx5z"] Dec 03 17:59:52 crc kubenswrapper[4687]: I1203 17:59:52.232107 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fc787b46b-k9z8g" event={"ID":"42536d5c-2479-4f9f-a6ff-d3705bb42b8f","Type":"ContainerStarted","Data":"d9e9fb2142b5112bb0fa88156f8ca33b7918a98b926155de8255691b689d021e"} Dec 03 17:59:52 crc kubenswrapper[4687]: I1203 17:59:52.235267 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dc58d75dc-vk2m4" event={"ID":"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e","Type":"ContainerStarted","Data":"19cf167646d1af4318c8e49b0ec20a8e0fe4f1621ec96eee3d38e4db08cc2b00"} Dec 03 17:59:52 crc kubenswrapper[4687]: I1203 17:59:52.236832 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" event={"ID":"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5","Type":"ContainerStarted","Data":"905051cc2ca3cfd50c50aa5f820d0b0d495e5a61c0a199a4cba92b8fb6ebc069"} Dec 03 17:59:52 crc kubenswrapper[4687]: I1203 17:59:52.238519 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9f7f69fd-rlx5z" event={"ID":"4ee7b1f1-9288-49f2-948f-4635d6676e64","Type":"ContainerStarted","Data":"c31dffce5e35de304469049efd8857aef9b8b8fe107b182b9a2c935be6dcce0b"} Dec 03 17:59:52 crc kubenswrapper[4687]: I1203 17:59:52.242084 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" event={"ID":"68f5675a-1ac6-475a-b0ba-b83e975e838f","Type":"ContainerStarted","Data":"7103b2d499f868385ebb5f5beafcbbb379172571c055d10742aec97ba397b9d1"} Dec 03 17:59:52 crc kubenswrapper[4687]: I1203 17:59:52.242238 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 17:59:52 crc kubenswrapper[4687]: I1203 17:59:52.242253 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 17:59:53 crc kubenswrapper[4687]: 
I1203 17:59:53.241151 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.277439 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9f7f69fd-rlx5z" event={"ID":"4ee7b1f1-9288-49f2-948f-4635d6676e64","Type":"ContainerStarted","Data":"5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d"} Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.277485 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9f7f69fd-rlx5z" event={"ID":"4ee7b1f1-9288-49f2-948f-4635d6676e64","Type":"ContainerStarted","Data":"3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807"} Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.277500 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.282929 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.291109 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-schhv" event={"ID":"67159b4a-2e66-424e-9e93-4863da0f5b56","Type":"ContainerStarted","Data":"c7ed67c3f470f06a2c5856e8532edb6094d3f342d491f66de49765a3ffba4968"} Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.294861 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7fc787b46b-k9z8g" event={"ID":"42536d5c-2479-4f9f-a6ff-d3705bb42b8f","Type":"ContainerStarted","Data":"09f9f6b0eeae83e2423bab4e644493a0f61e653f399163c5c0a1ec486c15171d"} Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.298266 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7fc787b46b-k9z8g" Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.300284 4687 
generic.go:334] "Generic (PLEG): container finished" podID="4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" containerID="3b145bfceccb9765d693e1189d95d66789c3cc80d37616dcd3f111714ef11dd1" exitCode=0 Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.304021 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" event={"ID":"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5","Type":"ContainerDied","Data":"3b145bfceccb9765d693e1189d95d66789c3cc80d37616dcd3f111714ef11dd1"} Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.305380 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.314968 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.339592 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b9f7f69fd-rlx5z" podStartSLOduration=2.3395674570000002 podStartE2EDuration="2.339567457s" podCreationTimestamp="2025-12-03 17:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:53.294136358 +0000 UTC m=+1226.184831791" watchObservedRunningTime="2025-12-03 17:59:53.339567457 +0000 UTC m=+1226.230262900" Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.356552 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-schhv" podStartSLOduration=3.72650089 podStartE2EDuration="48.356533796s" podCreationTimestamp="2025-12-03 17:59:05 +0000 UTC" firstStartedPulling="2025-12-03 17:59:07.272044721 +0000 UTC m=+1180.162740154" lastFinishedPulling="2025-12-03 17:59:51.902077627 +0000 UTC m=+1224.792773060" observedRunningTime="2025-12-03 17:59:53.355984661 +0000 UTC m=+1226.246680114" watchObservedRunningTime="2025-12-03 17:59:53.356533796 +0000 
UTC m=+1226.247229229" Dec 03 17:59:53 crc kubenswrapper[4687]: I1203 17:59:53.541212 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7fc787b46b-k9z8g" podStartSLOduration=3.541187597 podStartE2EDuration="3.541187597s" podCreationTimestamp="2025-12-03 17:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:53.509255374 +0000 UTC m=+1226.399950827" watchObservedRunningTime="2025-12-03 17:59:53.541187597 +0000 UTC m=+1226.431883040" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.197575 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f84949b66-zfm22"] Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.207065 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.209450 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.209947 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.243224 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f84949b66-zfm22"] Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.279764 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-config-data-custom\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.279813 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-public-tls-certs\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.279884 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-config-data\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.279928 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-internal-tls-certs\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.279983 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-combined-ca-bundle\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.280014 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjwdx\" (UniqueName: \"kubernetes.io/projected/c37e3f72-636e-4175-a805-8b2aa8f52eca-kube-api-access-pjwdx\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.280227 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c37e3f72-636e-4175-a805-8b2aa8f52eca-logs\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.382040 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-combined-ca-bundle\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.382235 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjwdx\" (UniqueName: \"kubernetes.io/projected/c37e3f72-636e-4175-a805-8b2aa8f52eca-kube-api-access-pjwdx\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.382284 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c37e3f72-636e-4175-a805-8b2aa8f52eca-logs\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.382415 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-config-data-custom\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.382455 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-public-tls-certs\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.382593 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-config-data\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.382679 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-internal-tls-certs\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.383022 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c37e3f72-636e-4175-a805-8b2aa8f52eca-logs\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.390515 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-internal-tls-certs\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.391962 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-combined-ca-bundle\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.392332 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-config-data\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.397791 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-public-tls-certs\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.403720 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjwdx\" (UniqueName: \"kubernetes.io/projected/c37e3f72-636e-4175-a805-8b2aa8f52eca-kube-api-access-pjwdx\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.423672 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-58975c669d-5qj7w" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.424531 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c37e3f72-636e-4175-a805-8b2aa8f52eca-config-data-custom\") pod \"barbican-api-f84949b66-zfm22\" (UID: \"c37e3f72-636e-4175-a805-8b2aa8f52eca\") " pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.487760 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6968cc7b7b-57qh6" podUID="b08dc684-ab9f-41db-a259-2d06b757f3cf" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Dec 03 17:59:54 crc kubenswrapper[4687]: I1203 17:59:54.533339 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:55 crc kubenswrapper[4687]: I1203 17:59:55.376191 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" event={"ID":"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5","Type":"ContainerStarted","Data":"865fd9ff1920180d3d289d4c7e36855a67c7de21ae657bc8effadaf3d2ad612b"} Dec 03 17:59:55 crc kubenswrapper[4687]: I1203 17:59:55.401341 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" podStartSLOduration=5.401324082 podStartE2EDuration="5.401324082s" podCreationTimestamp="2025-12-03 17:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:55.398375462 +0000 UTC m=+1228.289070905" watchObservedRunningTime="2025-12-03 17:59:55.401324082 +0000 UTC m=+1228.292019515" Dec 03 17:59:55 crc kubenswrapper[4687]: I1203 17:59:55.687835 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f84949b66-zfm22"] Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.344213 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.432323 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dc58d75dc-vk2m4" event={"ID":"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e","Type":"ContainerStarted","Data":"2cc78e491f288f3af055997c0d930fce01fbec5c16deb3f821184be6e9816688"} Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.432374 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dc58d75dc-vk2m4" event={"ID":"6f8ac0e6-dadf-44e8-8e92-56c306da2a8e","Type":"ContainerStarted","Data":"7fb7cf4541ca15e2d5c79566be2821806349eae1b35c397f8b5a83b46fb4034f"} Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.472186 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7dc58d75dc-vk2m4" podStartSLOduration=3.076659459 podStartE2EDuration="6.47216986s" podCreationTimestamp="2025-12-03 17:59:50 +0000 UTC" firstStartedPulling="2025-12-03 17:59:51.735431562 +0000 UTC m=+1224.626127015" lastFinishedPulling="2025-12-03 17:59:55.130941983 +0000 UTC m=+1228.021637416" observedRunningTime="2025-12-03 17:59:56.471622906 +0000 UTC m=+1229.362318339" watchObservedRunningTime="2025-12-03 17:59:56.47216986 +0000 UTC m=+1229.362865293" Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.473587 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f84949b66-zfm22" event={"ID":"c37e3f72-636e-4175-a805-8b2aa8f52eca","Type":"ContainerStarted","Data":"02535fe64337417e3c8c76dcac028acc380836bbced092b89246d4587ecb54ad"} Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.473682 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f84949b66-zfm22" event={"ID":"c37e3f72-636e-4175-a805-8b2aa8f52eca","Type":"ContainerStarted","Data":"d96e9e660581f9000b81955daf946f261dcfe33cc8e7c374d001d87176765cbc"} Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 
17:59:56.473740 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f84949b66-zfm22" event={"ID":"c37e3f72-636e-4175-a805-8b2aa8f52eca","Type":"ContainerStarted","Data":"497e3fed7c05ae94d6f17791f8af6a6714b6b352c55f63ecf72ed3e3cad8a1f7"} Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.473816 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.473884 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f84949b66-zfm22" Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.501991 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" event={"ID":"68f5675a-1ac6-475a-b0ba-b83e975e838f","Type":"ContainerStarted","Data":"f79c8058305edd4445f81fef6acab571a86067a6f300895a0924a549f00f3b0b"} Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.502030 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" event={"ID":"68f5675a-1ac6-475a-b0ba-b83e975e838f","Type":"ContainerStarted","Data":"3fe547badab9deaa3f3fb71309466323392e48d02214d919439c1c259b342d6e"} Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.530674 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f84949b66-zfm22" podStartSLOduration=2.530653321 podStartE2EDuration="2.530653321s" podCreationTimestamp="2025-12-03 17:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 17:59:56.502326306 +0000 UTC m=+1229.393021739" watchObservedRunningTime="2025-12-03 17:59:56.530653321 +0000 UTC m=+1229.421348754" Dec 03 17:59:56 crc kubenswrapper[4687]: I1203 17:59:56.545259 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-keystone-listener-7b984dc754-pn82p" podStartSLOduration=3.2701894510000002 podStartE2EDuration="6.545240395s" podCreationTimestamp="2025-12-03 17:59:50 +0000 UTC" firstStartedPulling="2025-12-03 17:59:51.854058919 +0000 UTC m=+1224.744754352" lastFinishedPulling="2025-12-03 17:59:55.129109863 +0000 UTC m=+1228.019805296" observedRunningTime="2025-12-03 17:59:56.53465897 +0000 UTC m=+1229.425354403" watchObservedRunningTime="2025-12-03 17:59:56.545240395 +0000 UTC m=+1229.435935828" Dec 03 17:59:59 crc kubenswrapper[4687]: I1203 17:59:59.535835 4687 generic.go:334] "Generic (PLEG): container finished" podID="67159b4a-2e66-424e-9e93-4863da0f5b56" containerID="c7ed67c3f470f06a2c5856e8532edb6094d3f342d491f66de49765a3ffba4968" exitCode=0 Dec 03 17:59:59 crc kubenswrapper[4687]: I1203 17:59:59.535957 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-schhv" event={"ID":"67159b4a-2e66-424e-9e93-4863da0f5b56","Type":"ContainerDied","Data":"c7ed67c3f470f06a2c5856e8532edb6094d3f342d491f66de49765a3ffba4968"} Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.144423 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55"] Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.146343 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.155324 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.160644 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.168381 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55"] Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.264234 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0887fc-fb17-4743-bdf2-898815992dd9-config-volume\") pod \"collect-profiles-29413080-kdc55\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.264369 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef0887fc-fb17-4743-bdf2-898815992dd9-secret-volume\") pod \"collect-profiles-29413080-kdc55\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.264434 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ft55\" (UniqueName: \"kubernetes.io/projected/ef0887fc-fb17-4743-bdf2-898815992dd9-kube-api-access-4ft55\") pod \"collect-profiles-29413080-kdc55\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.366314 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef0887fc-fb17-4743-bdf2-898815992dd9-secret-volume\") pod \"collect-profiles-29413080-kdc55\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.366392 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ft55\" (UniqueName: \"kubernetes.io/projected/ef0887fc-fb17-4743-bdf2-898815992dd9-kube-api-access-4ft55\") pod \"collect-profiles-29413080-kdc55\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.366442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0887fc-fb17-4743-bdf2-898815992dd9-config-volume\") pod \"collect-profiles-29413080-kdc55\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.367338 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0887fc-fb17-4743-bdf2-898815992dd9-config-volume\") pod \"collect-profiles-29413080-kdc55\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.383960 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ef0887fc-fb17-4743-bdf2-898815992dd9-secret-volume\") pod \"collect-profiles-29413080-kdc55\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.384612 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ft55\" (UniqueName: \"kubernetes.io/projected/ef0887fc-fb17-4743-bdf2-898815992dd9-kube-api-access-4ft55\") pod \"collect-profiles-29413080-kdc55\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:00 crc kubenswrapper[4687]: I1203 18:00:00.498333 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" Dec 03 18:00:01 crc kubenswrapper[4687]: I1203 18:00:01.345305 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 18:00:01 crc kubenswrapper[4687]: I1203 18:00:01.428253 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crlv7"] Dec 03 18:00:01 crc kubenswrapper[4687]: I1203 18:00:01.431022 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" podUID="8d6ba5c5-8f42-4aca-8548-f385332049ed" containerName="dnsmasq-dns" containerID="cri-o://185938becf33e1c70a4682ecdf043839ee9457e50a8e6476267693b077ea2043" gracePeriod=10 Dec 03 18:00:02 crc kubenswrapper[4687]: I1203 18:00:02.579894 4687 generic.go:334] "Generic (PLEG): container finished" podID="8d6ba5c5-8f42-4aca-8548-f385332049ed" containerID="185938becf33e1c70a4682ecdf043839ee9457e50a8e6476267693b077ea2043" exitCode=0 Dec 03 18:00:02 crc kubenswrapper[4687]: I1203 18:00:02.579965 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-55f844cf75-crlv7" event={"ID":"8d6ba5c5-8f42-4aca-8548-f385332049ed","Type":"ContainerDied","Data":"185938becf33e1c70a4682ecdf043839ee9457e50a8e6476267693b077ea2043"} Dec 03 18:00:02 crc kubenswrapper[4687]: I1203 18:00:02.879629 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-schhv" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.021526 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-db-sync-config-data\") pod \"67159b4a-2e66-424e-9e93-4863da0f5b56\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.021585 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-combined-ca-bundle\") pod \"67159b4a-2e66-424e-9e93-4863da0f5b56\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.021670 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-scripts\") pod \"67159b4a-2e66-424e-9e93-4863da0f5b56\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.021749 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67159b4a-2e66-424e-9e93-4863da0f5b56-etc-machine-id\") pod \"67159b4a-2e66-424e-9e93-4863da0f5b56\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.021780 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-config-data\") pod \"67159b4a-2e66-424e-9e93-4863da0f5b56\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.021810 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4srk\" (UniqueName: \"kubernetes.io/projected/67159b4a-2e66-424e-9e93-4863da0f5b56-kube-api-access-q4srk\") pod \"67159b4a-2e66-424e-9e93-4863da0f5b56\" (UID: \"67159b4a-2e66-424e-9e93-4863da0f5b56\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.025545 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67159b4a-2e66-424e-9e93-4863da0f5b56-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "67159b4a-2e66-424e-9e93-4863da0f5b56" (UID: "67159b4a-2e66-424e-9e93-4863da0f5b56"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.028293 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-scripts" (OuterVolumeSpecName: "scripts") pod "67159b4a-2e66-424e-9e93-4863da0f5b56" (UID: "67159b4a-2e66-424e-9e93-4863da0f5b56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.039911 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "67159b4a-2e66-424e-9e93-4863da0f5b56" (UID: "67159b4a-2e66-424e-9e93-4863da0f5b56"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.057666 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67159b4a-2e66-424e-9e93-4863da0f5b56-kube-api-access-q4srk" (OuterVolumeSpecName: "kube-api-access-q4srk") pod "67159b4a-2e66-424e-9e93-4863da0f5b56" (UID: "67159b4a-2e66-424e-9e93-4863da0f5b56"). InnerVolumeSpecName "kube-api-access-q4srk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.071367 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67159b4a-2e66-424e-9e93-4863da0f5b56" (UID: "67159b4a-2e66-424e-9e93-4863da0f5b56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.105064 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-config-data" (OuterVolumeSpecName: "config-data") pod "67159b4a-2e66-424e-9e93-4863da0f5b56" (UID: "67159b4a-2e66-424e-9e93-4863da0f5b56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.123741 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.123872 4687 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/67159b4a-2e66-424e-9e93-4863da0f5b56-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.123946 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.124001 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4srk\" (UniqueName: \"kubernetes.io/projected/67159b4a-2e66-424e-9e93-4863da0f5b56-kube-api-access-q4srk\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.124630 4687 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.124718 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67159b4a-2e66-424e-9e93-4863da0f5b56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.168306 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.440544 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.597570 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-schhv" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.597616 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-schhv" event={"ID":"67159b4a-2e66-424e-9e93-4863da0f5b56","Type":"ContainerDied","Data":"783d24135ca63fecab3da39dfe07599048f106bfad74d4f4f632831923796b67"} Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.597641 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="783d24135ca63fecab3da39dfe07599048f106bfad74d4f4f632831923796b67" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.679566 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.846041 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-svc\") pod \"8d6ba5c5-8f42-4aca-8548-f385332049ed\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.846395 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-config\") pod \"8d6ba5c5-8f42-4aca-8548-f385332049ed\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.846446 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p5bd\" (UniqueName: \"kubernetes.io/projected/8d6ba5c5-8f42-4aca-8548-f385332049ed-kube-api-access-5p5bd\") pod \"8d6ba5c5-8f42-4aca-8548-f385332049ed\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " Dec 03 
18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.846550 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-swift-storage-0\") pod \"8d6ba5c5-8f42-4aca-8548-f385332049ed\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.846589 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-sb\") pod \"8d6ba5c5-8f42-4aca-8548-f385332049ed\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.846662 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-nb\") pod \"8d6ba5c5-8f42-4aca-8548-f385332049ed\" (UID: \"8d6ba5c5-8f42-4aca-8548-f385332049ed\") " Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.853948 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6ba5c5-8f42-4aca-8548-f385332049ed-kube-api-access-5p5bd" (OuterVolumeSpecName: "kube-api-access-5p5bd") pod "8d6ba5c5-8f42-4aca-8548-f385332049ed" (UID: "8d6ba5c5-8f42-4aca-8548-f385332049ed"). InnerVolumeSpecName "kube-api-access-5p5bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.930791 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8d6ba5c5-8f42-4aca-8548-f385332049ed" (UID: "8d6ba5c5-8f42-4aca-8548-f385332049ed"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.949693 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p5bd\" (UniqueName: \"kubernetes.io/projected/8d6ba5c5-8f42-4aca-8548-f385332049ed-kube-api-access-5p5bd\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.949729 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.961828 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8d6ba5c5-8f42-4aca-8548-f385332049ed" (UID: "8d6ba5c5-8f42-4aca-8548-f385332049ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.962258 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8d6ba5c5-8f42-4aca-8548-f385332049ed" (UID: "8d6ba5c5-8f42-4aca-8548-f385332049ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.963461 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-config" (OuterVolumeSpecName: "config") pod "8d6ba5c5-8f42-4aca-8548-f385332049ed" (UID: "8d6ba5c5-8f42-4aca-8548-f385332049ed"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:03 crc kubenswrapper[4687]: I1203 18:00:03.967087 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8d6ba5c5-8f42-4aca-8548-f385332049ed" (UID: "8d6ba5c5-8f42-4aca-8548-f385332049ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.051268 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.051299 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-config\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.051310 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.051319 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8d6ba5c5-8f42-4aca-8548-f385332049ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.077032 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 18:00:04 crc kubenswrapper[4687]: E1203 18:00:04.077598 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67159b4a-2e66-424e-9e93-4863da0f5b56" containerName="cinder-db-sync" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.077613 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="67159b4a-2e66-424e-9e93-4863da0f5b56" containerName="cinder-db-sync" Dec 03 18:00:04 crc kubenswrapper[4687]: E1203 18:00:04.077622 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6ba5c5-8f42-4aca-8548-f385332049ed" containerName="init" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.077628 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6ba5c5-8f42-4aca-8548-f385332049ed" containerName="init" Dec 03 18:00:04 crc kubenswrapper[4687]: E1203 18:00:04.077644 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6ba5c5-8f42-4aca-8548-f385332049ed" containerName="dnsmasq-dns" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.077651 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6ba5c5-8f42-4aca-8548-f385332049ed" containerName="dnsmasq-dns" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.077835 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="67159b4a-2e66-424e-9e93-4863da0f5b56" containerName="cinder-db-sync" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.077845 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6ba5c5-8f42-4aca-8548-f385332049ed" containerName="dnsmasq-dns" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.078833 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.081604 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.083408 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.083624 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.083749 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-qd5t9" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.087321 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.150894 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2d9p4"] Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.160829 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: W1203 18:00:04.184094 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef0887fc_fb17_4743_bdf2_898815992dd9.slice/crio-3c395d7109961710ed772fded876f6b146910532dfe32855c74068971f95c475 WatchSource:0}: Error finding container 3c395d7109961710ed772fded876f6b146910532dfe32855c74068971f95c475: Status 404 returned error can't find the container with id 3c395d7109961710ed772fded876f6b146910532dfe32855c74068971f95c475 Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.192522 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2d9p4"] Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.229695 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55"] Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.261519 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.261710 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-config\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.261762 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.261796 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.261865 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.261901 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.262023 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-scripts\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.262050 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.262080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5996\" (UniqueName: \"kubernetes.io/projected/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-kube-api-access-d5996\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.262174 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpbwz\" (UniqueName: \"kubernetes.io/projected/c58314c5-5c58-4b17-b039-2f7af7bb4f60-kube-api-access-wpbwz\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.262201 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.262251 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58314c5-5c58-4b17-b039-2f7af7bb4f60-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: E1203 18:00:04.287535 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.351862 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.353978 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.358857 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.364218 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-scripts\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.366508 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.366666 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5996\" (UniqueName: \"kubernetes.io/projected/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-kube-api-access-d5996\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.367232 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpbwz\" (UniqueName: 
\"kubernetes.io/projected/c58314c5-5c58-4b17-b039-2f7af7bb4f60-kube-api-access-wpbwz\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.367390 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.368083 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58314c5-5c58-4b17-b039-2f7af7bb4f60-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.368592 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.368843 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-config\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.368963 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data-custom\") pod \"cinder-scheduler-0\" 
(UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.369298 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.371037 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.371181 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58314c5-5c58-4b17-b039-2f7af7bb4f60-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.369451 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.372998 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.373562 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-config\") pod 
\"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.373877 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.374550 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-scripts\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.376548 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.376764 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.377316 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 
18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.381733 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.387676 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.388086 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.396952 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpbwz\" (UniqueName: \"kubernetes.io/projected/c58314c5-5c58-4b17-b039-2f7af7bb4f60-kube-api-access-wpbwz\") pod \"cinder-scheduler-0\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") " pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.402606 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5996\" (UniqueName: \"kubernetes.io/projected/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-kube-api-access-d5996\") pod \"dnsmasq-dns-5c9776ccc5-2d9p4\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.406066 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.475901 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c278e07-5206-4e79-a0db-9d2227e402a9-logs\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.475947 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.475963 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.475982 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-scripts\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.476022 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvk8\" (UniqueName: \"kubernetes.io/projected/8c278e07-5206-4e79-a0db-9d2227e402a9-kube-api-access-ggvk8\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.476036 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.476129 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c278e07-5206-4e79-a0db-9d2227e402a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.487391 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.591142 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c278e07-5206-4e79-a0db-9d2227e402a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.591238 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c278e07-5206-4e79-a0db-9d2227e402a9-logs\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.591259 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.591475 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.591498 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-scripts\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.591532 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggvk8\" (UniqueName: \"kubernetes.io/projected/8c278e07-5206-4e79-a0db-9d2227e402a9-kube-api-access-ggvk8\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.591548 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.599604 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c278e07-5206-4e79-a0db-9d2227e402a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.600748 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c278e07-5206-4e79-a0db-9d2227e402a9-logs\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") 
" pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.606921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.611363 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-scripts\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.611604 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.612233 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.633663 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggvk8\" (UniqueName: \"kubernetes.io/projected/8c278e07-5206-4e79-a0db-9d2227e402a9-kube-api-access-ggvk8\") pod \"cinder-api-0\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") " pod="openstack/cinder-api-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.636170 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" 
event={"ID":"8d6ba5c5-8f42-4aca-8548-f385332049ed","Type":"ContainerDied","Data":"9a0b2ce74461dd095ca29f7d74176b09c4eb3f05ce27e283cec3f829381f3bd2"} Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.636243 4687 scope.go:117] "RemoveContainer" containerID="185938becf33e1c70a4682ecdf043839ee9457e50a8e6476267693b077ea2043" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.636405 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-crlv7" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.669498 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee43441f-77ef-4fd7-a326-b173070a6060","Type":"ContainerStarted","Data":"bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9"} Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.669707 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="ceilometer-notification-agent" containerID="cri-o://590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d" gracePeriod=30 Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.669915 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.670277 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="proxy-httpd" containerID="cri-o://bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9" gracePeriod=30 Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.670438 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="sg-core" 
containerID="cri-o://63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47" gracePeriod=30
Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.678549 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.707669 4687 generic.go:334] "Generic (PLEG): container finished" podID="ef0887fc-fb17-4743-bdf2-898815992dd9" containerID="764a517b4f033f44adf30ade85cf18221e64cffebe81eee6342cea2b41c49b5f" exitCode=0
Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.707705 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" event={"ID":"ef0887fc-fb17-4743-bdf2-898815992dd9","Type":"ContainerDied","Data":"764a517b4f033f44adf30ade85cf18221e64cffebe81eee6342cea2b41c49b5f"}
Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.707729 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" event={"ID":"ef0887fc-fb17-4743-bdf2-898815992dd9","Type":"ContainerStarted","Data":"3c395d7109961710ed772fded876f6b146910532dfe32855c74068971f95c475"}
Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.717365 4687 scope.go:117] "RemoveContainer" containerID="7ecf5c84f3a86bf875189da02f5c815e93ccc29e6adb948808219780e2e7990f"
Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.760238 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crlv7"]
Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.778183 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-crlv7"]
Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.924166 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2d9p4"]
Dec 03 18:00:04 crc kubenswrapper[4687]: W1203 18:00:04.946224 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a3b5e3d_5ec7_431b_8c14_fc79b496f9b0.slice/crio-a8b9a1214a149a7e9f5bf88340ecdc2841fb782113f2493e96cdc8a37f26c221 WatchSource:0}: Error finding container a8b9a1214a149a7e9f5bf88340ecdc2841fb782113f2493e96cdc8a37f26c221: Status 404 returned error can't find the container with id a8b9a1214a149a7e9f5bf88340ecdc2841fb782113f2493e96cdc8a37f26c221
Dec 03 18:00:04 crc kubenswrapper[4687]: I1203 18:00:04.961531 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.417334 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6ba5c5-8f42-4aca-8548-f385332049ed" path="/var/lib/kubelet/pods/8d6ba5c5-8f42-4aca-8548-f385332049ed/volumes"
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.472759 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 03 18:00:05 crc kubenswrapper[4687]: W1203 18:00:05.475372 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c278e07_5206_4e79_a0db_9d2227e402a9.slice/crio-66db88471ab1e7b9217b6e89e89602a7625fbb16c650a9108a6526423f1cbcf3 WatchSource:0}: Error finding container 66db88471ab1e7b9217b6e89e89602a7625fbb16c650a9108a6526423f1cbcf3: Status 404 returned error can't find the container with id 66db88471ab1e7b9217b6e89e89602a7625fbb16c650a9108a6526423f1cbcf3
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.738491 4687 generic.go:334] "Generic (PLEG): container finished" podID="ee43441f-77ef-4fd7-a326-b173070a6060" containerID="bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9" exitCode=0
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.738816 4687 generic.go:334] "Generic (PLEG): container finished" podID="ee43441f-77ef-4fd7-a326-b173070a6060" containerID="63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47" exitCode=2
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.738893 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee43441f-77ef-4fd7-a326-b173070a6060","Type":"ContainerDied","Data":"bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9"}
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.738923 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee43441f-77ef-4fd7-a326-b173070a6060","Type":"ContainerDied","Data":"63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47"}
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.773778 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c278e07-5206-4e79-a0db-9d2227e402a9","Type":"ContainerStarted","Data":"66db88471ab1e7b9217b6e89e89602a7625fbb16c650a9108a6526423f1cbcf3"}
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.777589 4687 generic.go:334] "Generic (PLEG): container finished" podID="9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" containerID="ddb580e846e5a6b3e6a9eb53ca4294faa2d71ee0db2ef8ef69064654e304bd83" exitCode=0
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.777652 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" event={"ID":"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0","Type":"ContainerDied","Data":"ddb580e846e5a6b3e6a9eb53ca4294faa2d71ee0db2ef8ef69064654e304bd83"}
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.777680 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" event={"ID":"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0","Type":"ContainerStarted","Data":"a8b9a1214a149a7e9f5bf88340ecdc2841fb782113f2493e96cdc8a37f26c221"}
Dec 03 18:00:05 crc kubenswrapper[4687]: I1203 18:00:05.784214 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c58314c5-5c58-4b17-b039-2f7af7bb4f60","Type":"ContainerStarted","Data":"93f0582fc9fa14a9b98da13863c4752d1361a1def17575dc199f8938b51f7991"}
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.053621 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66484d5554-njnbk"
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.282896 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55"
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.341980 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.399845 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0887fc-fb17-4743-bdf2-898815992dd9-config-volume\") pod \"ef0887fc-fb17-4743-bdf2-898815992dd9\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") "
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.399927 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ft55\" (UniqueName: \"kubernetes.io/projected/ef0887fc-fb17-4743-bdf2-898815992dd9-kube-api-access-4ft55\") pod \"ef0887fc-fb17-4743-bdf2-898815992dd9\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") "
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.400143 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef0887fc-fb17-4743-bdf2-898815992dd9-secret-volume\") pod \"ef0887fc-fb17-4743-bdf2-898815992dd9\" (UID: \"ef0887fc-fb17-4743-bdf2-898815992dd9\") "
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.401751 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0887fc-fb17-4743-bdf2-898815992dd9-config-volume" (OuterVolumeSpecName: "config-volume") pod "ef0887fc-fb17-4743-bdf2-898815992dd9" (UID: "ef0887fc-fb17-4743-bdf2-898815992dd9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.408140 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef0887fc-fb17-4743-bdf2-898815992dd9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ef0887fc-fb17-4743-bdf2-898815992dd9" (UID: "ef0887fc-fb17-4743-bdf2-898815992dd9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.414678 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0887fc-fb17-4743-bdf2-898815992dd9-kube-api-access-4ft55" (OuterVolumeSpecName: "kube-api-access-4ft55") pod "ef0887fc-fb17-4743-bdf2-898815992dd9" (UID: "ef0887fc-fb17-4743-bdf2-898815992dd9"). InnerVolumeSpecName "kube-api-access-4ft55". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.502653 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0887fc-fb17-4743-bdf2-898815992dd9-config-volume\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.502680 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ft55\" (UniqueName: \"kubernetes.io/projected/ef0887fc-fb17-4743-bdf2-898815992dd9-kube-api-access-4ft55\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.502691 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ef0887fc-fb17-4743-bdf2-898815992dd9-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.799536 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c278e07-5206-4e79-a0db-9d2227e402a9","Type":"ContainerStarted","Data":"c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6"}
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.809969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" event={"ID":"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0","Type":"ContainerStarted","Data":"1769f2e71fbf9f6832492d5c5072ca64b8903c6fc541c8f086561035853d1350"}
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.811083 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4"
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.817392 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55" event={"ID":"ef0887fc-fb17-4743-bdf2-898815992dd9","Type":"ContainerDied","Data":"3c395d7109961710ed772fded876f6b146910532dfe32855c74068971f95c475"}
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.817427 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c395d7109961710ed772fded876f6b146910532dfe32855c74068971f95c475"
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.817486 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55"
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.928045 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" podStartSLOduration=2.928028237 podStartE2EDuration="2.928028237s" podCreationTimestamp="2025-12-03 18:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:06.885395305 +0000 UTC m=+1239.776090738" watchObservedRunningTime="2025-12-03 18:00:06.928028237 +0000 UTC m=+1239.818723670"
Dec 03 18:00:06 crc kubenswrapper[4687]: I1203 18:00:06.966381 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f84949b66-zfm22"
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.007981 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f84949b66-zfm22"
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.083009 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b9f7f69fd-rlx5z"]
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.083336 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b9f7f69fd-rlx5z" podUID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerName="barbican-api" containerID="cri-o://5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d" gracePeriod=30
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.083519 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b9f7f69fd-rlx5z" podUID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerName="barbican-api-log" containerID="cri-o://3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807" gracePeriod=30
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.811200 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6968cc7b7b-57qh6"
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.827951 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c278e07-5206-4e79-a0db-9d2227e402a9","Type":"ContainerStarted","Data":"891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b"}
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.828092 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerName="cinder-api-log" containerID="cri-o://c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6" gracePeriod=30
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.828176 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.828204 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerName="cinder-api" containerID="cri-o://891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b" gracePeriod=30
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.840069 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c58314c5-5c58-4b17-b039-2f7af7bb4f60","Type":"ContainerStarted","Data":"b6af99c6de502a951fa5bd0b921b8ce45bee92dc0204bb6dcfbc3f1e775bdb1e"}
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.849702 4687 generic.go:334] "Generic (PLEG): container finished" podID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerID="3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807" exitCode=143
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.849781 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9f7f69fd-rlx5z" event={"ID":"4ee7b1f1-9288-49f2-948f-4635d6676e64","Type":"ContainerDied","Data":"3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807"}
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.852967 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-58975c669d-5qj7w"
Dec 03 18:00:07 crc kubenswrapper[4687]: I1203 18:00:07.876277 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.876256631 podStartE2EDuration="3.876256631s" podCreationTimestamp="2025-12-03 18:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:07.866704423 +0000 UTC m=+1240.757399866" watchObservedRunningTime="2025-12-03 18:00:07.876256631 +0000 UTC m=+1240.766952064"
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.640178 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d44b68cb5-gzqxl"
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.763274 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66484d5554-njnbk"]
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.763694 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66484d5554-njnbk" podUID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerName="neutron-api" containerID="cri-o://003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0" gracePeriod=30
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.764055 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66484d5554-njnbk" podUID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerName="neutron-httpd" containerID="cri-o://ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b" gracePeriod=30
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.784517 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.860878 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.873147 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-run-httpd\") pod \"ee43441f-77ef-4fd7-a326-b173070a6060\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.873241 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-combined-ca-bundle\") pod \"ee43441f-77ef-4fd7-a326-b173070a6060\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.873328 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-sg-core-conf-yaml\") pod \"ee43441f-77ef-4fd7-a326-b173070a6060\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.873682 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-scripts\") pod \"ee43441f-77ef-4fd7-a326-b173070a6060\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.873762 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-log-httpd\") pod \"ee43441f-77ef-4fd7-a326-b173070a6060\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.873783 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-config-data\") pod \"ee43441f-77ef-4fd7-a326-b173070a6060\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.873849 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2bxm\" (UniqueName: \"kubernetes.io/projected/ee43441f-77ef-4fd7-a326-b173070a6060-kube-api-access-n2bxm\") pod \"ee43441f-77ef-4fd7-a326-b173070a6060\" (UID: \"ee43441f-77ef-4fd7-a326-b173070a6060\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.874855 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ee43441f-77ef-4fd7-a326-b173070a6060" (UID: "ee43441f-77ef-4fd7-a326-b173070a6060"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.875102 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ee43441f-77ef-4fd7-a326-b173070a6060" (UID: "ee43441f-77ef-4fd7-a326-b173070a6060"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.892390 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-scripts" (OuterVolumeSpecName: "scripts") pod "ee43441f-77ef-4fd7-a326-b173070a6060" (UID: "ee43441f-77ef-4fd7-a326-b173070a6060"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.892676 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee43441f-77ef-4fd7-a326-b173070a6060-kube-api-access-n2bxm" (OuterVolumeSpecName: "kube-api-access-n2bxm") pod "ee43441f-77ef-4fd7-a326-b173070a6060" (UID: "ee43441f-77ef-4fd7-a326-b173070a6060"). InnerVolumeSpecName "kube-api-access-n2bxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.900342 4687 generic.go:334] "Generic (PLEG): container finished" podID="ee43441f-77ef-4fd7-a326-b173070a6060" containerID="590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d" exitCode=0
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.900414 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee43441f-77ef-4fd7-a326-b173070a6060","Type":"ContainerDied","Data":"590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d"}
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.900447 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee43441f-77ef-4fd7-a326-b173070a6060","Type":"ContainerDied","Data":"21bd6d9b89e665e210f87c59ccf19e982f1bc971ff870cd9f7a3c9f53d548633"}
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.900467 4687 scope.go:117] "RemoveContainer" containerID="bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9"
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.900522 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.915360 4687 generic.go:334] "Generic (PLEG): container finished" podID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerID="891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b" exitCode=0
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.915386 4687 generic.go:334] "Generic (PLEG): container finished" podID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerID="c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6" exitCode=143
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.915462 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c278e07-5206-4e79-a0db-9d2227e402a9","Type":"ContainerDied","Data":"891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b"}
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.915488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c278e07-5206-4e79-a0db-9d2227e402a9","Type":"ContainerDied","Data":"c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6"}
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.915498 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8c278e07-5206-4e79-a0db-9d2227e402a9","Type":"ContainerDied","Data":"66db88471ab1e7b9217b6e89e89602a7625fbb16c650a9108a6526423f1cbcf3"}
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.915568 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.930374 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c58314c5-5c58-4b17-b039-2f7af7bb4f60","Type":"ContainerStarted","Data":"a5aa894c9bbdccd70848e601c4e6cae124fa13fa1f369dd6f51e728d93bb70d0"}
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.940052 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ee43441f-77ef-4fd7-a326-b173070a6060" (UID: "ee43441f-77ef-4fd7-a326-b173070a6060"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.969754 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.976460526 podStartE2EDuration="4.969736566s" podCreationTimestamp="2025-12-03 18:00:04 +0000 UTC" firstStartedPulling="2025-12-03 18:00:05.012791411 +0000 UTC m=+1237.903486844" lastFinishedPulling="2025-12-03 18:00:06.006067451 +0000 UTC m=+1238.896762884" observedRunningTime="2025-12-03 18:00:08.957542447 +0000 UTC m=+1241.848237880" watchObservedRunningTime="2025-12-03 18:00:08.969736566 +0000 UTC m=+1241.860431999"
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.976969 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c278e07-5206-4e79-a0db-9d2227e402a9-logs\") pod \"8c278e07-5206-4e79-a0db-9d2227e402a9\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.977198 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data\") pod \"8c278e07-5206-4e79-a0db-9d2227e402a9\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.977582 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c278e07-5206-4e79-a0db-9d2227e402a9-etc-machine-id\") pod \"8c278e07-5206-4e79-a0db-9d2227e402a9\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.977685 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-scripts\") pod \"8c278e07-5206-4e79-a0db-9d2227e402a9\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.977799 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggvk8\" (UniqueName: \"kubernetes.io/projected/8c278e07-5206-4e79-a0db-9d2227e402a9-kube-api-access-ggvk8\") pod \"8c278e07-5206-4e79-a0db-9d2227e402a9\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.977873 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-combined-ca-bundle\") pod \"8c278e07-5206-4e79-a0db-9d2227e402a9\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.977957 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data-custom\") pod \"8c278e07-5206-4e79-a0db-9d2227e402a9\" (UID: \"8c278e07-5206-4e79-a0db-9d2227e402a9\") "
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.978419 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.978483 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.978536 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2bxm\" (UniqueName: \"kubernetes.io/projected/ee43441f-77ef-4fd7-a326-b173070a6060-kube-api-access-n2bxm\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.978587 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee43441f-77ef-4fd7-a326-b173070a6060-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.978658 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.978858 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c278e07-5206-4e79-a0db-9d2227e402a9-logs" (OuterVolumeSpecName: "logs") pod "8c278e07-5206-4e79-a0db-9d2227e402a9" (UID: "8c278e07-5206-4e79-a0db-9d2227e402a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.980289 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c278e07-5206-4e79-a0db-9d2227e402a9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8c278e07-5206-4e79-a0db-9d2227e402a9" (UID: "8c278e07-5206-4e79-a0db-9d2227e402a9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.982246 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8c278e07-5206-4e79-a0db-9d2227e402a9" (UID: "8c278e07-5206-4e79-a0db-9d2227e402a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.988019 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-scripts" (OuterVolumeSpecName: "scripts") pod "8c278e07-5206-4e79-a0db-9d2227e402a9" (UID: "8c278e07-5206-4e79-a0db-9d2227e402a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.989257 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee43441f-77ef-4fd7-a326-b173070a6060" (UID: "ee43441f-77ef-4fd7-a326-b173070a6060"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.998631 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c278e07-5206-4e79-a0db-9d2227e402a9-kube-api-access-ggvk8" (OuterVolumeSpecName: "kube-api-access-ggvk8") pod "8c278e07-5206-4e79-a0db-9d2227e402a9" (UID: "8c278e07-5206-4e79-a0db-9d2227e402a9"). InnerVolumeSpecName "kube-api-access-ggvk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 18:00:08 crc kubenswrapper[4687]: I1203 18:00:08.998828 4687 scope.go:117] "RemoveContainer" containerID="63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.036336 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c278e07-5206-4e79-a0db-9d2227e402a9" (UID: "8c278e07-5206-4e79-a0db-9d2227e402a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.074200 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-config-data" (OuterVolumeSpecName: "config-data") pod "ee43441f-77ef-4fd7-a326-b173070a6060" (UID: "ee43441f-77ef-4fd7-a326-b173070a6060"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.081251 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data" (OuterVolumeSpecName: "config-data") pod "8c278e07-5206-4e79-a0db-9d2227e402a9" (UID: "8c278e07-5206-4e79-a0db-9d2227e402a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.082438 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.082471 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c278e07-5206-4e79-a0db-9d2227e402a9-logs\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.082485 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.082497 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee43441f-77ef-4fd7-a326-b173070a6060-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.082510 4687 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c278e07-5206-4e79-a0db-9d2227e402a9-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.082521 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.082532 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggvk8\" (UniqueName: \"kubernetes.io/projected/8c278e07-5206-4e79-a0db-9d2227e402a9-kube-api-access-ggvk8\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.082545 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.082558 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c278e07-5206-4e79-a0db-9d2227e402a9-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.142740 4687 scope.go:117] "RemoveContainer" containerID="590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.174319 4687 scope.go:117] "RemoveContainer" containerID="bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9"
Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.175336 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9\": container with ID starting with bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9 not found: ID does not exist" containerID="bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.175370 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9"} err="failed to get container status \"bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9\": rpc error: code = NotFound desc = could not find container \"bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9\": container with ID starting with bfed63df81c6968cf06e4d67170ebecea02d759305b075dd62659b3134a934e9 not found: ID does not exist"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.175391 4687 scope.go:117] "RemoveContainer" containerID="63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47"
Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.175619 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47\": container with ID starting with 63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47 not found: ID does not exist" containerID="63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.175646 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47"} err="failed to get container status \"63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47\": rpc error: code = NotFound desc = could not find container \"63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47\": container with ID starting with 63456dd811f08d1ab9cd2c1ab6f11bb8efa3dad1f590b23de79d09856f6c1b47 not found: ID does not exist"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.175662 4687 scope.go:117] "RemoveContainer" containerID="590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d"
Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.176411 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d\": container with ID starting with 590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d not found: ID does not exist" containerID="590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.176437 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d"} err="failed to get container status \"590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d\": rpc error: code = NotFound desc = could not find container \"590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d\": container with ID starting with 590891fa04046e208c60856c086b5817d84b7b871c434b880a701b19e5f9644d not found: ID does not exist"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.176454 4687 scope.go:117] "RemoveContainer" containerID="891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.196138 4687 scope.go:117] "RemoveContainer" containerID="c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.219705 4687 scope.go:117] "RemoveContainer" containerID="891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b"
Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.220286 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b\": container with ID starting with 891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b not found: ID does not exist" containerID="891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b"
Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.220390 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b"} err="failed to get container status \"891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b\": rpc error: code = NotFound desc = could not find container \"891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b\": container with ID starting with
891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b not found: ID does not exist" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.220442 4687 scope.go:117] "RemoveContainer" containerID="c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6" Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.221425 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6\": container with ID starting with c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6 not found: ID does not exist" containerID="c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.221460 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6"} err="failed to get container status \"c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6\": rpc error: code = NotFound desc = could not find container \"c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6\": container with ID starting with c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6 not found: ID does not exist" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.221484 4687 scope.go:117] "RemoveContainer" containerID="891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.221780 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b"} err="failed to get container status \"891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b\": rpc error: code = NotFound desc = could not find container \"891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b\": container with ID 
starting with 891cd20abb97bc15f7b382ec49e13eb929d9a1cb5bb761a1911891cea92ff05b not found: ID does not exist" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.221803 4687 scope.go:117] "RemoveContainer" containerID="c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.222102 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6"} err="failed to get container status \"c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6\": rpc error: code = NotFound desc = could not find container \"c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6\": container with ID starting with c0adc4cdbd288629016778474347a3686f388e2ce7391122df1e7a11721f40e6 not found: ID does not exist" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.278622 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.292013 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.303558 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.313220 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.324414 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.324819 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="sg-core" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.324838 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="sg-core" 
Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.324868 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="proxy-httpd" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.324874 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="proxy-httpd" Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.324885 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerName="cinder-api" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.324890 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerName="cinder-api" Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.324900 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="ceilometer-notification-agent" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.324906 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="ceilometer-notification-agent" Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.324914 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerName="cinder-api-log" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.324920 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerName="cinder-api-log" Dec 03 18:00:09 crc kubenswrapper[4687]: E1203 18:00:09.325419 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0887fc-fb17-4743-bdf2-898815992dd9" containerName="collect-profiles" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.325434 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0887fc-fb17-4743-bdf2-898815992dd9" containerName="collect-profiles" Dec 
03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.325594 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerName="cinder-api-log" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.325620 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="sg-core" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.325631 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="proxy-httpd" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.325640 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0887fc-fb17-4743-bdf2-898815992dd9" containerName="collect-profiles" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.325648 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" containerName="ceilometer-notification-agent" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.325656 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c278e07-5206-4e79-a0db-9d2227e402a9" containerName="cinder-api" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.327852 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.330959 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.331564 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.332855 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.334939 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.341383 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.345069 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.345267 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.345336 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.347010 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.409332 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.426084 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c278e07-5206-4e79-a0db-9d2227e402a9" path="/var/lib/kubelet/pods/8c278e07-5206-4e79-a0db-9d2227e402a9/volumes" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.427056 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee43441f-77ef-4fd7-a326-b173070a6060" path="/var/lib/kubelet/pods/ee43441f-77ef-4fd7-a326-b173070a6060/volumes" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490588 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4766b79-a447-4290-bbe9-dc10a59ced40-logs\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490632 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvm9b\" (UniqueName: \"kubernetes.io/projected/e4766b79-a447-4290-bbe9-dc10a59ced40-kube-api-access-pvm9b\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490669 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-scripts\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490691 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490710 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4766b79-a447-4290-bbe9-dc10a59ced40-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490764 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-scripts\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490789 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490809 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-config-data\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490875 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490931 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490957 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-run-httpd\") pod \"ceilometer-0\" (UID: 
\"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.490986 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mlrg\" (UniqueName: \"kubernetes.io/projected/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-kube-api-access-6mlrg\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.491007 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-config-data\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.491027 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-config-data-custom\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.491058 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-log-httpd\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592213 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-log-httpd\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592260 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592315 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4766b79-a447-4290-bbe9-dc10a59ced40-logs\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592345 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvm9b\" (UniqueName: \"kubernetes.io/projected/e4766b79-a447-4290-bbe9-dc10a59ced40-kube-api-access-pvm9b\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592375 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592395 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-scripts\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592412 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4766b79-a447-4290-bbe9-dc10a59ced40-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592441 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-scripts\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592455 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592469 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-config-data\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592514 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592554 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592572 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-run-httpd\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592593 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mlrg\" (UniqueName: \"kubernetes.io/projected/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-kube-api-access-6mlrg\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592611 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-config-data\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592626 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-config-data-custom\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.592714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-log-httpd\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.593508 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e4766b79-a447-4290-bbe9-dc10a59ced40-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: 
I1203 18:00:09.593796 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-run-httpd\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.593871 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e4766b79-a447-4290-bbe9-dc10a59ced40-logs\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.599201 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.599938 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.600708 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-scripts\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.602212 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-config-data\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " 
pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.602786 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-scripts\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.603826 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.604501 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.608028 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.608932 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4766b79-a447-4290-bbe9-dc10a59ced40-config-data-custom\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.613497 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mlrg\" (UniqueName: 
\"kubernetes.io/projected/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-kube-api-access-6mlrg\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.616021 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvm9b\" (UniqueName: \"kubernetes.io/projected/e4766b79-a447-4290-bbe9-dc10a59ced40-kube-api-access-pvm9b\") pod \"cinder-api-0\" (UID: \"e4766b79-a447-4290-bbe9-dc10a59ced40\") " pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.620998 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-config-data\") pod \"ceilometer-0\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.676865 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.698600 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.897403 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6968cc7b7b-57qh6" Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.967089 4687 generic.go:334] "Generic (PLEG): container finished" podID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerID="ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b" exitCode=0 Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.967178 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66484d5554-njnbk" event={"ID":"058e41aa-d6d6-43a8-a98a-3ba0433acbd5","Type":"ContainerDied","Data":"ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b"} Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.989973 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-58975c669d-5qj7w"] Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.990255 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58975c669d-5qj7w" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon-log" containerID="cri-o://e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824" gracePeriod=30 Dec 03 18:00:09 crc kubenswrapper[4687]: I1203 18:00:09.990892 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-58975c669d-5qj7w" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" containerID="cri-o://4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd" gracePeriod=30 Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.002782 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58975c669d-5qj7w" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": 
EOF" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.178216 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.289457 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 03 18:00:10 crc kubenswrapper[4687]: W1203 18:00:10.302001 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4766b79_a447_4290_bbe9_dc10a59ced40.slice/crio-c7adcf7214ab6a8296943d2779f65d678978952920395ec6a68205d570ddd477 WatchSource:0}: Error finding container c7adcf7214ab6a8296943d2779f65d678978952920395ec6a68205d570ddd477: Status 404 returned error can't find the container with id c7adcf7214ab6a8296943d2779f65d678978952920395ec6a68205d570ddd477 Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.661838 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.820387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data-custom\") pod \"4ee7b1f1-9288-49f2-948f-4635d6676e64\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.820965 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z55pm\" (UniqueName: \"kubernetes.io/projected/4ee7b1f1-9288-49f2-948f-4635d6676e64-kube-api-access-z55pm\") pod \"4ee7b1f1-9288-49f2-948f-4635d6676e64\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.821033 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data\") pod \"4ee7b1f1-9288-49f2-948f-4635d6676e64\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.821137 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ee7b1f1-9288-49f2-948f-4635d6676e64-logs\") pod \"4ee7b1f1-9288-49f2-948f-4635d6676e64\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.821172 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-combined-ca-bundle\") pod \"4ee7b1f1-9288-49f2-948f-4635d6676e64\" (UID: \"4ee7b1f1-9288-49f2-948f-4635d6676e64\") " Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.822452 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee7b1f1-9288-49f2-948f-4635d6676e64-logs" (OuterVolumeSpecName: "logs") pod "4ee7b1f1-9288-49f2-948f-4635d6676e64" (UID: "4ee7b1f1-9288-49f2-948f-4635d6676e64"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.825405 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4ee7b1f1-9288-49f2-948f-4635d6676e64" (UID: "4ee7b1f1-9288-49f2-948f-4635d6676e64"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.832381 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee7b1f1-9288-49f2-948f-4635d6676e64-kube-api-access-z55pm" (OuterVolumeSpecName: "kube-api-access-z55pm") pod "4ee7b1f1-9288-49f2-948f-4635d6676e64" (UID: "4ee7b1f1-9288-49f2-948f-4635d6676e64"). InnerVolumeSpecName "kube-api-access-z55pm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.869046 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ee7b1f1-9288-49f2-948f-4635d6676e64" (UID: "4ee7b1f1-9288-49f2-948f-4635d6676e64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.875164 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data" (OuterVolumeSpecName: "config-data") pod "4ee7b1f1-9288-49f2-948f-4635d6676e64" (UID: "4ee7b1f1-9288-49f2-948f-4635d6676e64"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.923764 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z55pm\" (UniqueName: \"kubernetes.io/projected/4ee7b1f1-9288-49f2-948f-4635d6676e64-kube-api-access-z55pm\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.923799 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.923810 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ee7b1f1-9288-49f2-948f-4635d6676e64-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.923819 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.923827 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ee7b1f1-9288-49f2-948f-4635d6676e64-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.983334 4687 generic.go:334] "Generic (PLEG): container finished" podID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerID="5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d" exitCode=0 Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.983393 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9f7f69fd-rlx5z" event={"ID":"4ee7b1f1-9288-49f2-948f-4635d6676e64","Type":"ContainerDied","Data":"5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d"} Dec 03 18:00:10 crc kubenswrapper[4687]: 
I1203 18:00:10.983419 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b9f7f69fd-rlx5z" event={"ID":"4ee7b1f1-9288-49f2-948f-4635d6676e64","Type":"ContainerDied","Data":"c31dffce5e35de304469049efd8857aef9b8b8fe107b182b9a2c935be6dcce0b"} Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.983437 4687 scope.go:117] "RemoveContainer" containerID="5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.983529 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b9f7f69fd-rlx5z" Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.991522 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4766b79-a447-4290-bbe9-dc10a59ced40","Type":"ContainerStarted","Data":"5510f6dc7fa8ed7ca297ec68d4506a2e10f52f335f5f90f35f5987cb31def528"} Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.991576 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4766b79-a447-4290-bbe9-dc10a59ced40","Type":"ContainerStarted","Data":"c7adcf7214ab6a8296943d2779f65d678978952920395ec6a68205d570ddd477"} Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.997263 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerStarted","Data":"9bc9e5245bfc3cdf03fccd5bb88deb95628699821bf2462df1a73d9037afdf9b"} Dec 03 18:00:10 crc kubenswrapper[4687]: I1203 18:00:10.997326 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerStarted","Data":"066a7e26933d89eb170c9a5fbf369df7372e690e67548f0135b709a5b4ace105"} Dec 03 18:00:11 crc kubenswrapper[4687]: I1203 18:00:11.120819 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b9f7f69fd-rlx5z"] 
Dec 03 18:00:11 crc kubenswrapper[4687]: I1203 18:00:11.139545 4687 scope.go:117] "RemoveContainer" containerID="3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807" Dec 03 18:00:11 crc kubenswrapper[4687]: I1203 18:00:11.149585 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b9f7f69fd-rlx5z"] Dec 03 18:00:11 crc kubenswrapper[4687]: I1203 18:00:11.161727 4687 scope.go:117] "RemoveContainer" containerID="5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d" Dec 03 18:00:11 crc kubenswrapper[4687]: E1203 18:00:11.162320 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d\": container with ID starting with 5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d not found: ID does not exist" containerID="5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d" Dec 03 18:00:11 crc kubenswrapper[4687]: I1203 18:00:11.162357 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d"} err="failed to get container status \"5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d\": rpc error: code = NotFound desc = could not find container \"5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d\": container with ID starting with 5a6df9851b56015ddcfa8b0ffa7a813b84d1bcd5a138ec1896376b40d462e84d not found: ID does not exist" Dec 03 18:00:11 crc kubenswrapper[4687]: I1203 18:00:11.162385 4687 scope.go:117] "RemoveContainer" containerID="3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807" Dec 03 18:00:11 crc kubenswrapper[4687]: E1203 18:00:11.162709 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807\": container with ID starting with 3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807 not found: ID does not exist" containerID="3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807" Dec 03 18:00:11 crc kubenswrapper[4687]: I1203 18:00:11.162800 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807"} err="failed to get container status \"3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807\": rpc error: code = NotFound desc = could not find container \"3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807\": container with ID starting with 3d45a38ec007a3e3b1b8b92726f3186516fa5f27601467f519ba8c4e48cb4807 not found: ID does not exist" Dec 03 18:00:11 crc kubenswrapper[4687]: I1203 18:00:11.420813 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee7b1f1-9288-49f2-948f-4635d6676e64" path="/var/lib/kubelet/pods/4ee7b1f1-9288-49f2-948f-4635d6676e64/volumes" Dec 03 18:00:12 crc kubenswrapper[4687]: I1203 18:00:12.009724 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e4766b79-a447-4290-bbe9-dc10a59ced40","Type":"ContainerStarted","Data":"ded026f5851fda8192a771f1f3ca199eb6bfb782e165c79e3f3263a42ab19297"} Dec 03 18:00:12 crc kubenswrapper[4687]: I1203 18:00:12.011256 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 03 18:00:12 crc kubenswrapper[4687]: I1203 18:00:12.014684 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerStarted","Data":"62b5a1185186e458ac90f91a86a2545f20c634027066d6ad4b4e33e6318a3c13"} Dec 03 18:00:12 crc kubenswrapper[4687]: I1203 18:00:12.036416 4687 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.036394531 podStartE2EDuration="3.036394531s" podCreationTimestamp="2025-12-03 18:00:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:12.031697535 +0000 UTC m=+1244.922392968" watchObservedRunningTime="2025-12-03 18:00:12.036394531 +0000 UTC m=+1244.927089964" Dec 03 18:00:13 crc kubenswrapper[4687]: I1203 18:00:13.031101 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerStarted","Data":"3d29f3d3ebdfd8e7f6491468bfca9cf5d0d04582ece5a962c950e0771000cd6f"} Dec 03 18:00:13 crc kubenswrapper[4687]: I1203 18:00:13.405791 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58975c669d-5qj7w" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:45292->10.217.0.148:8443: read: connection reset by peer" Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.040167 4687 generic.go:334] "Generic (PLEG): container finished" podID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerID="4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd" exitCode=0 Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.040221 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58975c669d-5qj7w" event={"ID":"2559a1aa-62c1-43b3-9183-66ebe4d8efc9","Type":"ContainerDied","Data":"4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd"} Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.043237 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerStarted","Data":"83f58f1b3c9470627708f8ba563f3afdbcf7ad68f58b1d224ede175ec2ee17a7"} Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.069967 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.126407939 podStartE2EDuration="5.069944431s" podCreationTimestamp="2025-12-03 18:00:09 +0000 UTC" firstStartedPulling="2025-12-03 18:00:10.184015832 +0000 UTC m=+1243.074711265" lastFinishedPulling="2025-12-03 18:00:13.127552324 +0000 UTC m=+1246.018247757" observedRunningTime="2025-12-03 18:00:14.064873614 +0000 UTC m=+1246.955569057" watchObservedRunningTime="2025-12-03 18:00:14.069944431 +0000 UTC m=+1246.960639864" Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.111419 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.111497 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.396805 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58975c669d-5qj7w" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.488788 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.581465 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-w5ps9"] Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.581764 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" podUID="4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" containerName="dnsmasq-dns" containerID="cri-o://865fd9ff1920180d3d289d4c7e36855a67c7de21ae657bc8effadaf3d2ad612b" gracePeriod=10 Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.681552 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 18:00:14 crc kubenswrapper[4687]: I1203 18:00:14.734074 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.070505 4687 generic.go:334] "Generic (PLEG): container finished" podID="4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" containerID="865fd9ff1920180d3d289d4c7e36855a67c7de21ae657bc8effadaf3d2ad612b" exitCode=0 Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.070561 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" event={"ID":"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5","Type":"ContainerDied","Data":"865fd9ff1920180d3d289d4c7e36855a67c7de21ae657bc8effadaf3d2ad612b"} Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.070772 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerName="cinder-scheduler" containerID="cri-o://b6af99c6de502a951fa5bd0b921b8ce45bee92dc0204bb6dcfbc3f1e775bdb1e" gracePeriod=30 Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.071271 4687 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerName="probe" containerID="cri-o://a5aa894c9bbdccd70848e601c4e6cae124fa13fa1f369dd6f51e728d93bb70d0" gracePeriod=30 Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.072169 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.174069 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.267443 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-sb\") pod \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.267518 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-config\") pod \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.267569 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srqd5\" (UniqueName: \"kubernetes.io/projected/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-kube-api-access-srqd5\") pod \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.267597 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-swift-storage-0\") pod \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " Dec 03 18:00:15 crc 
kubenswrapper[4687]: I1203 18:00:15.267632 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-nb\") pod \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.267726 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-svc\") pod \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\" (UID: \"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5\") " Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.273114 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-kube-api-access-srqd5" (OuterVolumeSpecName: "kube-api-access-srqd5") pod "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" (UID: "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5"). InnerVolumeSpecName "kube-api-access-srqd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.321300 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" (UID: "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.325427 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" (UID: "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.331277 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" (UID: "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.347589 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-config" (OuterVolumeSpecName: "config") pod "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" (UID: "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.359421 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" (UID: "4a9d3b38-1c8e-4946-a25d-22d8428ee1c5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.369839 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-config\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.370158 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srqd5\" (UniqueName: \"kubernetes.io/projected/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-kube-api-access-srqd5\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.370280 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.370551 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.370662 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.370817 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:15 crc kubenswrapper[4687]: E1203 18:00:15.656819 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod058e41aa_d6d6_43a8_a98a_3ba0433acbd5.slice/crio-003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a9d3b38_1c8e_4946_a25d_22d8428ee1c5.slice\": RecentStats: unable to find data in memory cache]" Dec 03 18:00:15 crc kubenswrapper[4687]: I1203 18:00:15.980965 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66484d5554-njnbk" Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.082094 4687 generic.go:334] "Generic (PLEG): container finished" podID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerID="003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0" exitCode=0 Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.082150 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66484d5554-njnbk" Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.082222 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66484d5554-njnbk" event={"ID":"058e41aa-d6d6-43a8-a98a-3ba0433acbd5","Type":"ContainerDied","Data":"003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0"} Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.082257 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66484d5554-njnbk" event={"ID":"058e41aa-d6d6-43a8-a98a-3ba0433acbd5","Type":"ContainerDied","Data":"7fbba7c8e87d11e91aef35e41870b832abec4290ca3e2e71dc8ceac52b328284"} Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.082275 4687 scope.go:117] "RemoveContainer" containerID="ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b" Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.083294 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-combined-ca-bundle\") pod \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.083326 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-config\") pod \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.083379 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-httpd-config\") pod \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.083468 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hhjg\" (UniqueName: \"kubernetes.io/projected/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-kube-api-access-8hhjg\") pod \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.083515 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-ovndb-tls-certs\") pod \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\" (UID: \"058e41aa-d6d6-43a8-a98a-3ba0433acbd5\") " Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.089367 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9"
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.089366 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-w5ps9" event={"ID":"4a9d3b38-1c8e-4946-a25d-22d8428ee1c5","Type":"ContainerDied","Data":"905051cc2ca3cfd50c50aa5f820d0b0d495e5a61c0a199a4cba92b8fb6ebc069"}
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.089436 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "058e41aa-d6d6-43a8-a98a-3ba0433acbd5" (UID: "058e41aa-d6d6-43a8-a98a-3ba0433acbd5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.093979 4687 generic.go:334] "Generic (PLEG): container finished" podID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerID="a5aa894c9bbdccd70848e601c4e6cae124fa13fa1f369dd6f51e728d93bb70d0" exitCode=0
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.094065 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c58314c5-5c58-4b17-b039-2f7af7bb4f60","Type":"ContainerDied","Data":"a5aa894c9bbdccd70848e601c4e6cae124fa13fa1f369dd6f51e728d93bb70d0"}
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.098305 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-kube-api-access-8hhjg" (OuterVolumeSpecName: "kube-api-access-8hhjg") pod "058e41aa-d6d6-43a8-a98a-3ba0433acbd5" (UID: "058e41aa-d6d6-43a8-a98a-3ba0433acbd5"). InnerVolumeSpecName "kube-api-access-8hhjg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.160369 4687 scope.go:117] "RemoveContainer" containerID="003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0"
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.174925 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-w5ps9"]
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.175240 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-config" (OuterVolumeSpecName: "config") pod "058e41aa-d6d6-43a8-a98a-3ba0433acbd5" (UID: "058e41aa-d6d6-43a8-a98a-3ba0433acbd5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.175286 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "058e41aa-d6d6-43a8-a98a-3ba0433acbd5" (UID: "058e41aa-d6d6-43a8-a98a-3ba0433acbd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.182976 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-w5ps9"]
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.185813 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.185837 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-config\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.185846 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.185856 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hhjg\" (UniqueName: \"kubernetes.io/projected/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-kube-api-access-8hhjg\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.192441 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "058e41aa-d6d6-43a8-a98a-3ba0433acbd5" (UID: "058e41aa-d6d6-43a8-a98a-3ba0433acbd5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.203139 4687 scope.go:117] "RemoveContainer" containerID="ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b"
Dec 03 18:00:16 crc kubenswrapper[4687]: E1203 18:00:16.205532 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b\": container with ID starting with ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b not found: ID does not exist" containerID="ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b"
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.205579 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b"} err="failed to get container status \"ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b\": rpc error: code = NotFound desc = could not find container \"ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b\": container with ID starting with ce77017095ee9d39828d6a6e8a94706213b891cb4664e96abc0ae6b7abb77b1b not found: ID does not exist"
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.205607 4687 scope.go:117] "RemoveContainer" containerID="003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0"
Dec 03 18:00:16 crc kubenswrapper[4687]: E1203 18:00:16.211603 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0\": container with ID starting with 003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0 not found: ID does not exist" containerID="003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0"
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.211637 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0"} err="failed to get container status \"003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0\": rpc error: code = NotFound desc = could not find container \"003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0\": container with ID starting with 003e5a9263f0e81d20a9655754dfff3de3df2161576b8f0833fe5bfdf41300b0 not found: ID does not exist"
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.211659 4687 scope.go:117] "RemoveContainer" containerID="865fd9ff1920180d3d289d4c7e36855a67c7de21ae657bc8effadaf3d2ad612b"
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.230405 4687 scope.go:117] "RemoveContainer" containerID="3b145bfceccb9765d693e1189d95d66789c3cc80d37616dcd3f111714ef11dd1"
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.288708 4687 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/058e41aa-d6d6-43a8-a98a-3ba0433acbd5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.420187 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66484d5554-njnbk"]
Dec 03 18:00:16 crc kubenswrapper[4687]: I1203 18:00:16.427426 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66484d5554-njnbk"]
Dec 03 18:00:17 crc kubenswrapper[4687]: I1203 18:00:17.417978 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" path="/var/lib/kubelet/pods/058e41aa-d6d6-43a8-a98a-3ba0433acbd5/volumes"
Dec 03 18:00:17 crc kubenswrapper[4687]: I1203 18:00:17.418673 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" path="/var/lib/kubelet/pods/4a9d3b38-1c8e-4946-a25d-22d8428ee1c5/volumes"
Dec 03 18:00:18 crc kubenswrapper[4687]: I1203 18:00:18.293725 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-699567968b-hhzfv"
Dec 03 18:00:18 crc kubenswrapper[4687]: I1203 18:00:18.294893 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-699567968b-hhzfv"
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.134166 4687 generic.go:334] "Generic (PLEG): container finished" podID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerID="b6af99c6de502a951fa5bd0b921b8ce45bee92dc0204bb6dcfbc3f1e775bdb1e" exitCode=0
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.134240 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c58314c5-5c58-4b17-b039-2f7af7bb4f60","Type":"ContainerDied","Data":"b6af99c6de502a951fa5bd0b921b8ce45bee92dc0204bb6dcfbc3f1e775bdb1e"}
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.134743 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c58314c5-5c58-4b17-b039-2f7af7bb4f60","Type":"ContainerDied","Data":"93f0582fc9fa14a9b98da13863c4752d1361a1def17575dc199f8938b51f7991"}
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.134761 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f0582fc9fa14a9b98da13863c4752d1361a1def17575dc199f8938b51f7991"
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.200930 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.390776 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-scripts\") pod \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") "
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.390846 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpbwz\" (UniqueName: \"kubernetes.io/projected/c58314c5-5c58-4b17-b039-2f7af7bb4f60-kube-api-access-wpbwz\") pod \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") "
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.390984 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data\") pod \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") "
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.391043 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58314c5-5c58-4b17-b039-2f7af7bb4f60-etc-machine-id\") pod \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") "
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.391104 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data-custom\") pod \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") "
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.391163 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-combined-ca-bundle\") pod \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\" (UID: \"c58314c5-5c58-4b17-b039-2f7af7bb4f60\") "
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.393013 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c58314c5-5c58-4b17-b039-2f7af7bb4f60-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c58314c5-5c58-4b17-b039-2f7af7bb4f60" (UID: "c58314c5-5c58-4b17-b039-2f7af7bb4f60"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.397333 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c58314c5-5c58-4b17-b039-2f7af7bb4f60" (UID: "c58314c5-5c58-4b17-b039-2f7af7bb4f60"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.397684 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-scripts" (OuterVolumeSpecName: "scripts") pod "c58314c5-5c58-4b17-b039-2f7af7bb4f60" (UID: "c58314c5-5c58-4b17-b039-2f7af7bb4f60"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.411722 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58314c5-5c58-4b17-b039-2f7af7bb4f60-kube-api-access-wpbwz" (OuterVolumeSpecName: "kube-api-access-wpbwz") pod "c58314c5-5c58-4b17-b039-2f7af7bb4f60" (UID: "c58314c5-5c58-4b17-b039-2f7af7bb4f60"). InnerVolumeSpecName "kube-api-access-wpbwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.456758 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c58314c5-5c58-4b17-b039-2f7af7bb4f60" (UID: "c58314c5-5c58-4b17-b039-2f7af7bb4f60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.493174 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpbwz\" (UniqueName: \"kubernetes.io/projected/c58314c5-5c58-4b17-b039-2f7af7bb4f60-kube-api-access-wpbwz\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.493209 4687 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c58314c5-5c58-4b17-b039-2f7af7bb4f60-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.493218 4687 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.493226 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.493235 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-scripts\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.503382 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data" (OuterVolumeSpecName: "config-data") pod "c58314c5-5c58-4b17-b039-2f7af7bb4f60" (UID: "c58314c5-5c58-4b17-b039-2f7af7bb4f60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 18:00:20 crc kubenswrapper[4687]: I1203 18:00:20.594683 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c58314c5-5c58-4b17-b039-2f7af7bb4f60-config-data\") on node \"crc\" DevicePath \"\""
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.143924 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.196331 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.209735 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.222214 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 03 18:00:21 crc kubenswrapper[4687]: E1203 18:00:21.222679 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" containerName="dnsmasq-dns"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.222715 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" containerName="dnsmasq-dns"
Dec 03 18:00:21 crc kubenswrapper[4687]: E1203 18:00:21.222746 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerName="barbican-api-log"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.222755 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerName="barbican-api-log"
Dec 03 18:00:21 crc kubenswrapper[4687]: E1203 18:00:21.222781 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerName="cinder-scheduler"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.222790 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerName="cinder-scheduler"
Dec 03 18:00:21 crc kubenswrapper[4687]: E1203 18:00:21.222806 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerName="neutron-api"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.222815 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerName="neutron-api"
Dec 03 18:00:21 crc kubenswrapper[4687]: E1203 18:00:21.222837 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerName="probe"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.222846 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerName="probe"
Dec 03 18:00:21 crc kubenswrapper[4687]: E1203 18:00:21.222947 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerName="neutron-httpd"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.222971 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerName="neutron-httpd"
Dec 03 18:00:21 crc kubenswrapper[4687]: E1203 18:00:21.222984 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" containerName="init"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.222993 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" containerName="init"
Dec 03 18:00:21 crc kubenswrapper[4687]: E1203 18:00:21.223010 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerName="barbican-api"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.223018 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerName="barbican-api"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.223250 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerName="cinder-scheduler"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.223277 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerName="neutron-httpd"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.223295 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerName="barbican-api"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.223307 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="058e41aa-d6d6-43a8-a98a-3ba0433acbd5" containerName="neutron-api"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.223327 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee7b1f1-9288-49f2-948f-4635d6676e64" containerName="barbican-api-log"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.223341 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9d3b38-1c8e-4946-a25d-22d8428ee1c5" containerName="dnsmasq-dns"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.223364 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" containerName="probe"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.224654 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.231101 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.245402 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.307587 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-config-data\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.307704 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05480209-7592-4ddf-a2d9-f06d4dce2c75-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.307730 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.307758 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clt59\" (UniqueName: \"kubernetes.io/projected/05480209-7592-4ddf-a2d9-f06d4dce2c75-kube-api-access-clt59\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.307796 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-scripts\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.307852 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.409620 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05480209-7592-4ddf-a2d9-f06d4dce2c75-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.409680 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.409726 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clt59\" (UniqueName: \"kubernetes.io/projected/05480209-7592-4ddf-a2d9-f06d4dce2c75-kube-api-access-clt59\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.409732 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05480209-7592-4ddf-a2d9-f06d4dce2c75-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.409817 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-scripts\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.409852 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.409927 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-config-data\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.414516 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-scripts\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.415259 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.415570 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-config-data\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.416396 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05480209-7592-4ddf-a2d9-f06d4dce2c75-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.432436 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c58314c5-5c58-4b17-b039-2f7af7bb4f60" path="/var/lib/kubelet/pods/c58314c5-5c58-4b17-b039-2f7af7bb4f60/volumes"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.442074 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clt59\" (UniqueName: \"kubernetes.io/projected/05480209-7592-4ddf-a2d9-f06d4dce2c75-kube-api-access-clt59\") pod \"cinder-scheduler-0\" (UID: \"05480209-7592-4ddf-a2d9-f06d4dce2c75\") " pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.549047 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 03 18:00:21 crc kubenswrapper[4687]: I1203 18:00:21.841275 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 03 18:00:22 crc kubenswrapper[4687]: I1203 18:00:22.037384 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 03 18:00:22 crc kubenswrapper[4687]: I1203 18:00:22.158215 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"05480209-7592-4ddf-a2d9-f06d4dce2c75","Type":"ContainerStarted","Data":"c7d321ed307cc9d6faa22722aed70d0e82077d96f1a354639ca7909a9427328b"}
Dec 03 18:00:22 crc kubenswrapper[4687]: I1203 18:00:22.935407 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7fc787b46b-k9z8g"
Dec 03 18:00:23 crc kubenswrapper[4687]: I1203 18:00:23.176069 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"05480209-7592-4ddf-a2d9-f06d4dce2c75","Type":"ContainerStarted","Data":"c3f3107efa4b29f8fcbbff81776b887aaf5104ba2f2dda118ea54fc8720439e7"}
Dec 03 18:00:23 crc kubenswrapper[4687]: I1203 18:00:23.943433 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Dec 03 18:00:23 crc kubenswrapper[4687]: I1203 18:00:23.944877 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 03 18:00:23 crc kubenswrapper[4687]: I1203 18:00:23.952287 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Dec 03 18:00:23 crc kubenswrapper[4687]: I1203 18:00:23.952313 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Dec 03 18:00:23 crc kubenswrapper[4687]: I1203 18:00:23.958116 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8mzfq"
Dec 03 18:00:23 crc kubenswrapper[4687]: I1203 18:00:23.966181 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.005995 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bf6226-8105-471c-8098-0786e52ab01d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.006190 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b2bf6226-8105-471c-8098-0786e52ab01d-openstack-config\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.006225 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klwrn\" (UniqueName: \"kubernetes.io/projected/b2bf6226-8105-471c-8098-0786e52ab01d-kube-api-access-klwrn\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.006272 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b2bf6226-8105-471c-8098-0786e52ab01d-openstack-config-secret\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.108452 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b2bf6226-8105-471c-8098-0786e52ab01d-openstack-config\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.108530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klwrn\" (UniqueName: \"kubernetes.io/projected/b2bf6226-8105-471c-8098-0786e52ab01d-kube-api-access-klwrn\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.108591 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b2bf6226-8105-471c-8098-0786e52ab01d-openstack-config-secret\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.108704 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bf6226-8105-471c-8098-0786e52ab01d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.110261 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b2bf6226-8105-471c-8098-0786e52ab01d-openstack-config\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.128310 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2bf6226-8105-471c-8098-0786e52ab01d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.128817 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klwrn\" (UniqueName: \"kubernetes.io/projected/b2bf6226-8105-471c-8098-0786e52ab01d-kube-api-access-klwrn\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.131512 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b2bf6226-8105-471c-8098-0786e52ab01d-openstack-config-secret\") pod \"openstackclient\" (UID: \"b2bf6226-8105-471c-8098-0786e52ab01d\") " pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.197761 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"05480209-7592-4ddf-a2d9-f06d4dce2c75","Type":"ContainerStarted","Data":"c05afa5d47d1efcaa00fc9037abbf200a2f9ba7cf79dbc144156a888fc282778"}
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.225253 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.225233262 podStartE2EDuration="3.225233262s" podCreationTimestamp="2025-12-03 18:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:24.222921649 +0000 UTC m=+1257.113617082" watchObservedRunningTime="2025-12-03 18:00:24.225233262 +0000 UTC m=+1257.115928695"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.273516 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.396084 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58975c669d-5qj7w" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused"
Dec 03 18:00:24 crc kubenswrapper[4687]: I1203 18:00:24.749447 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Dec 03 18:00:25 crc kubenswrapper[4687]: I1203 18:00:25.206450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b2bf6226-8105-471c-8098-0786e52ab01d","Type":"ContainerStarted","Data":"74a9ff244f67d3b8286007a3475ccd5016552bcbb8a686bb75ccde54c0549238"}
Dec 03 18:00:26 crc kubenswrapper[4687]: I1203 18:00:26.549387 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.005083 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7bd478575-t6xjs"]
Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.007445 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7bd478575-t6xjs"
Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.027250 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7bd478575-t6xjs"]
Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.027642 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.028021 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.028222 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.193349 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7jt\" (UniqueName: \"kubernetes.io/projected/70063881-c779-4ed9-9258-a175b3ee15f4-kube-api-access-wl7jt\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs"
Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.193671 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70063881-c779-4ed9-9258-a175b3ee15f4-run-httpd\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs"
Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.193850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-internal-tls-certs\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs"
Dec 03 18:00:28 crc
kubenswrapper[4687]: I1203 18:00:28.193915 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-config-data\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.193946 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-public-tls-certs\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.194004 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-combined-ca-bundle\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.194144 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70063881-c779-4ed9-9258-a175b3ee15f4-log-httpd\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.194240 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70063881-c779-4ed9-9258-a175b3ee15f4-etc-swift\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 
18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.295833 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70063881-c779-4ed9-9258-a175b3ee15f4-run-httpd\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.295909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-internal-tls-certs\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.295946 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-config-data\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.295974 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-public-tls-certs\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.296016 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-combined-ca-bundle\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.296073 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70063881-c779-4ed9-9258-a175b3ee15f4-log-httpd\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.296142 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70063881-c779-4ed9-9258-a175b3ee15f4-etc-swift\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.296222 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7jt\" (UniqueName: \"kubernetes.io/projected/70063881-c779-4ed9-9258-a175b3ee15f4-kube-api-access-wl7jt\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.296663 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70063881-c779-4ed9-9258-a175b3ee15f4-run-httpd\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.296890 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70063881-c779-4ed9-9258-a175b3ee15f4-log-httpd\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.304979 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-internal-tls-certs\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.305085 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-public-tls-certs\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.307251 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70063881-c779-4ed9-9258-a175b3ee15f4-etc-swift\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.313193 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-combined-ca-bundle\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.317945 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7jt\" (UniqueName: \"kubernetes.io/projected/70063881-c779-4ed9-9258-a175b3ee15f4-kube-api-access-wl7jt\") pod \"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.319995 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70063881-c779-4ed9-9258-a175b3ee15f4-config-data\") pod 
\"swift-proxy-7bd478575-t6xjs\" (UID: \"70063881-c779-4ed9-9258-a175b3ee15f4\") " pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.379686 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.855806 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.856457 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="ceilometer-central-agent" containerID="cri-o://9bc9e5245bfc3cdf03fccd5bb88deb95628699821bf2462df1a73d9037afdf9b" gracePeriod=30 Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.857234 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="proxy-httpd" containerID="cri-o://83f58f1b3c9470627708f8ba563f3afdbcf7ad68f58b1d224ede175ec2ee17a7" gracePeriod=30 Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.857297 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="sg-core" containerID="cri-o://3d29f3d3ebdfd8e7f6491468bfca9cf5d0d04582ece5a962c950e0771000cd6f" gracePeriod=30 Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.857349 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="ceilometer-notification-agent" containerID="cri-o://62b5a1185186e458ac90f91a86a2545f20c634027066d6ad4b4e33e6318a3c13" gracePeriod=30 Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.868477 4687 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ceilometer-0" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.167:3000/\": EOF" Dec 03 18:00:28 crc kubenswrapper[4687]: I1203 18:00:28.986663 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7bd478575-t6xjs"] Dec 03 18:00:29 crc kubenswrapper[4687]: I1203 18:00:29.263555 4687 generic.go:334] "Generic (PLEG): container finished" podID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerID="83f58f1b3c9470627708f8ba563f3afdbcf7ad68f58b1d224ede175ec2ee17a7" exitCode=0 Dec 03 18:00:29 crc kubenswrapper[4687]: I1203 18:00:29.263785 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerDied","Data":"83f58f1b3c9470627708f8ba563f3afdbcf7ad68f58b1d224ede175ec2ee17a7"} Dec 03 18:00:29 crc kubenswrapper[4687]: I1203 18:00:29.263847 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerDied","Data":"3d29f3d3ebdfd8e7f6491468bfca9cf5d0d04582ece5a962c950e0771000cd6f"} Dec 03 18:00:29 crc kubenswrapper[4687]: I1203 18:00:29.263803 4687 generic.go:334] "Generic (PLEG): container finished" podID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerID="3d29f3d3ebdfd8e7f6491468bfca9cf5d0d04582ece5a962c950e0771000cd6f" exitCode=2 Dec 03 18:00:29 crc kubenswrapper[4687]: I1203 18:00:29.266795 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7bd478575-t6xjs" event={"ID":"70063881-c779-4ed9-9258-a175b3ee15f4","Type":"ContainerStarted","Data":"cd77a6498b226b9079e2dacd13f3913bd0ffb6a202a22773860995b6a6e4b48d"} Dec 03 18:00:30 crc kubenswrapper[4687]: I1203 18:00:30.285423 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7bd478575-t6xjs" 
event={"ID":"70063881-c779-4ed9-9258-a175b3ee15f4","Type":"ContainerStarted","Data":"92b0986d2c4edf7ed1a944b59895554aa50aa151d7c0079194be3e46b8254d65"} Dec 03 18:00:30 crc kubenswrapper[4687]: I1203 18:00:30.292064 4687 generic.go:334] "Generic (PLEG): container finished" podID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerID="9bc9e5245bfc3cdf03fccd5bb88deb95628699821bf2462df1a73d9037afdf9b" exitCode=0 Dec 03 18:00:30 crc kubenswrapper[4687]: I1203 18:00:30.292138 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerDied","Data":"9bc9e5245bfc3cdf03fccd5bb88deb95628699821bf2462df1a73d9037afdf9b"} Dec 03 18:00:30 crc kubenswrapper[4687]: I1203 18:00:30.295638 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 18:00:30 crc kubenswrapper[4687]: I1203 18:00:30.295918 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" containerName="glance-log" containerID="cri-o://c9b6710490130851c7c0c4cd38651ab01ce2c4618dc004e0ec2c0ec17b932425" gracePeriod=30 Dec 03 18:00:30 crc kubenswrapper[4687]: I1203 18:00:30.296013 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" containerName="glance-httpd" containerID="cri-o://be7d3f13d113d001caffcacb29157eb5808f0aa43792c298dc3540709114c41f" gracePeriod=30 Dec 03 18:00:31 crc kubenswrapper[4687]: I1203 18:00:31.030180 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 18:00:31 crc kubenswrapper[4687]: I1203 18:00:31.030419 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" 
containerName="glance-log" containerID="cri-o://b8b8e9567bb8052c7af385d033a0bd56025ab703e11a800592b0aa5a4ea127c3" gracePeriod=30 Dec 03 18:00:31 crc kubenswrapper[4687]: I1203 18:00:31.030511 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" containerName="glance-httpd" containerID="cri-o://438523e3d2999130dea41de7ac0d605343b6151204268e021bf10fa5e804885a" gracePeriod=30 Dec 03 18:00:31 crc kubenswrapper[4687]: I1203 18:00:31.308480 4687 generic.go:334] "Generic (PLEG): container finished" podID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" containerID="b8b8e9567bb8052c7af385d033a0bd56025ab703e11a800592b0aa5a4ea127c3" exitCode=143 Dec 03 18:00:31 crc kubenswrapper[4687]: I1203 18:00:31.308749 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b1b60fd3-9d07-4696-8ccf-540ce446eb7b","Type":"ContainerDied","Data":"b8b8e9567bb8052c7af385d033a0bd56025ab703e11a800592b0aa5a4ea127c3"} Dec 03 18:00:31 crc kubenswrapper[4687]: I1203 18:00:31.311315 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" containerID="c9b6710490130851c7c0c4cd38651ab01ce2c4618dc004e0ec2c0ec17b932425" exitCode=143 Dec 03 18:00:31 crc kubenswrapper[4687]: I1203 18:00:31.311346 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e","Type":"ContainerDied","Data":"c9b6710490130851c7c0c4cd38651ab01ce2c4618dc004e0ec2c0ec17b932425"} Dec 03 18:00:31 crc kubenswrapper[4687]: I1203 18:00:31.789661 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 03 18:00:33 crc kubenswrapper[4687]: I1203 18:00:33.353533 4687 generic.go:334] "Generic (PLEG): container finished" podID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" 
containerID="62b5a1185186e458ac90f91a86a2545f20c634027066d6ad4b4e33e6318a3c13" exitCode=0 Dec 03 18:00:33 crc kubenswrapper[4687]: I1203 18:00:33.353610 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerDied","Data":"62b5a1185186e458ac90f91a86a2545f20c634027066d6ad4b4e33e6318a3c13"} Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.300048 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-jnf9v"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.301559 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.309584 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jnf9v"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.372352 4687 generic.go:334] "Generic (PLEG): container finished" podID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" containerID="438523e3d2999130dea41de7ac0d605343b6151204268e021bf10fa5e804885a" exitCode=0 Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.372414 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b1b60fd3-9d07-4696-8ccf-540ce446eb7b","Type":"ContainerDied","Data":"438523e3d2999130dea41de7ac0d605343b6151204268e021bf10fa5e804885a"} Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.374631 4687 generic.go:334] "Generic (PLEG): container finished" podID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" containerID="be7d3f13d113d001caffcacb29157eb5808f0aa43792c298dc3540709114c41f" exitCode=0 Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.374654 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e","Type":"ContainerDied","Data":"be7d3f13d113d001caffcacb29157eb5808f0aa43792c298dc3540709114c41f"} Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.398865 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-58975c669d-5qj7w" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.400999 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jtncf"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.403648 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jtncf" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.410650 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jtncf"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.447091 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-operator-scripts\") pod \"nova-api-db-create-jnf9v\" (UID: \"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\") " pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.447161 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984f5\" (UniqueName: \"kubernetes.io/projected/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-kube-api-access-984f5\") pod \"nova-api-db-create-jnf9v\" (UID: \"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\") " pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.507481 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-db-create-7j48z"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.509771 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.536327 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7j48z"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.549351 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jk7w\" (UniqueName: \"kubernetes.io/projected/dd6c641d-c691-45d3-8549-25373fef300c-kube-api-access-2jk7w\") pod \"nova-cell0-db-create-jtncf\" (UID: \"dd6c641d-c691-45d3-8549-25373fef300c\") " pod="openstack/nova-cell0-db-create-jtncf" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.549395 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-operator-scripts\") pod \"nova-api-db-create-jnf9v\" (UID: \"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\") " pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.549444 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-984f5\" (UniqueName: \"kubernetes.io/projected/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-kube-api-access-984f5\") pod \"nova-api-db-create-jnf9v\" (UID: \"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\") " pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.549508 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd6c641d-c691-45d3-8549-25373fef300c-operator-scripts\") pod \"nova-cell0-db-create-jtncf\" (UID: \"dd6c641d-c691-45d3-8549-25373fef300c\") " pod="openstack/nova-cell0-db-create-jtncf" Dec 
03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.550143 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-operator-scripts\") pod \"nova-api-db-create-jnf9v\" (UID: \"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\") " pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.556291 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2750-account-create-update-rhllc"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.558017 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.559821 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.593556 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-984f5\" (UniqueName: \"kubernetes.io/projected/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-kube-api-access-984f5\") pod \"nova-api-db-create-jnf9v\" (UID: \"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\") " pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.615899 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2750-account-create-update-rhllc"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.626764 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.652911 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08b03712-0693-4868-844b-2238f9703459-operator-scripts\") pod \"nova-cell1-db-create-7j48z\" (UID: \"08b03712-0693-4868-844b-2238f9703459\") " pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.652978 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqsm\" (UniqueName: \"kubernetes.io/projected/08b03712-0693-4868-844b-2238f9703459-kube-api-access-ncqsm\") pod \"nova-cell1-db-create-7j48z\" (UID: \"08b03712-0693-4868-844b-2238f9703459\") " pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.653068 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jk7w\" (UniqueName: \"kubernetes.io/projected/dd6c641d-c691-45d3-8549-25373fef300c-kube-api-access-2jk7w\") pod \"nova-cell0-db-create-jtncf\" (UID: \"dd6c641d-c691-45d3-8549-25373fef300c\") " pod="openstack/nova-cell0-db-create-jtncf" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.653169 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd6c641d-c691-45d3-8549-25373fef300c-operator-scripts\") pod \"nova-cell0-db-create-jtncf\" (UID: \"dd6c641d-c691-45d3-8549-25373fef300c\") " pod="openstack/nova-cell0-db-create-jtncf" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.653768 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd6c641d-c691-45d3-8549-25373fef300c-operator-scripts\") pod \"nova-cell0-db-create-jtncf\" (UID: 
\"dd6c641d-c691-45d3-8549-25373fef300c\") " pod="openstack/nova-cell0-db-create-jtncf" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.670537 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jk7w\" (UniqueName: \"kubernetes.io/projected/dd6c641d-c691-45d3-8549-25373fef300c-kube-api-access-2jk7w\") pod \"nova-cell0-db-create-jtncf\" (UID: \"dd6c641d-c691-45d3-8549-25373fef300c\") " pod="openstack/nova-cell0-db-create-jtncf" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.720527 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jtncf" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.726471 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0b6f-account-create-update-fk9cq"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.728716 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.731446 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.735007 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b6f-account-create-update-fk9cq"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.754334 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa60c9ab-b67f-4480-8bf3-7027c68166c5-operator-scripts\") pod \"nova-api-2750-account-create-update-rhllc\" (UID: \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\") " pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.754382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/08b03712-0693-4868-844b-2238f9703459-operator-scripts\") pod \"nova-cell1-db-create-7j48z\" (UID: \"08b03712-0693-4868-844b-2238f9703459\") " pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.754411 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncqsm\" (UniqueName: \"kubernetes.io/projected/08b03712-0693-4868-844b-2238f9703459-kube-api-access-ncqsm\") pod \"nova-cell1-db-create-7j48z\" (UID: \"08b03712-0693-4868-844b-2238f9703459\") " pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.754457 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b99f\" (UniqueName: \"kubernetes.io/projected/aa60c9ab-b67f-4480-8bf3-7027c68166c5-kube-api-access-8b99f\") pod \"nova-api-2750-account-create-update-rhllc\" (UID: \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\") " pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.755219 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08b03712-0693-4868-844b-2238f9703459-operator-scripts\") pod \"nova-cell1-db-create-7j48z\" (UID: \"08b03712-0693-4868-844b-2238f9703459\") " pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.774669 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncqsm\" (UniqueName: \"kubernetes.io/projected/08b03712-0693-4868-844b-2238f9703459-kube-api-access-ncqsm\") pod \"nova-cell1-db-create-7j48z\" (UID: \"08b03712-0693-4868-844b-2238f9703459\") " pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.835257 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.856453 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa60c9ab-b67f-4480-8bf3-7027c68166c5-operator-scripts\") pod \"nova-api-2750-account-create-update-rhllc\" (UID: \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\") " pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.856537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b99f\" (UniqueName: \"kubernetes.io/projected/aa60c9ab-b67f-4480-8bf3-7027c68166c5-kube-api-access-8b99f\") pod \"nova-api-2750-account-create-update-rhllc\" (UID: \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\") " pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.856592 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph7jd\" (UniqueName: \"kubernetes.io/projected/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-kube-api-access-ph7jd\") pod \"nova-cell0-0b6f-account-create-update-fk9cq\" (UID: \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\") " pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.856616 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-operator-scripts\") pod \"nova-cell0-0b6f-account-create-update-fk9cq\" (UID: \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\") " pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.857360 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/aa60c9ab-b67f-4480-8bf3-7027c68166c5-operator-scripts\") pod \"nova-api-2750-account-create-update-rhllc\" (UID: \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\") " pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.886325 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b99f\" (UniqueName: \"kubernetes.io/projected/aa60c9ab-b67f-4480-8bf3-7027c68166c5-kube-api-access-8b99f\") pod \"nova-api-2750-account-create-update-rhllc\" (UID: \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\") " pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.929943 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-51b4-account-create-update-8jd4x"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.931193 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.933312 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.949969 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-51b4-account-create-update-8jd4x"] Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.952656 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.958041 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph7jd\" (UniqueName: \"kubernetes.io/projected/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-kube-api-access-ph7jd\") pod \"nova-cell0-0b6f-account-create-update-fk9cq\" (UID: \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\") " pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.958086 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-operator-scripts\") pod \"nova-cell0-0b6f-account-create-update-fk9cq\" (UID: \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\") " pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.958802 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-operator-scripts\") pod \"nova-cell0-0b6f-account-create-update-fk9cq\" (UID: \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\") " pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.975710 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph7jd\" (UniqueName: \"kubernetes.io/projected/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-kube-api-access-ph7jd\") pod \"nova-cell0-0b6f-account-create-update-fk9cq\" (UID: \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\") " pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.977512 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" 
containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": dial tcp 10.217.0.151:9292: connect: connection refused" Dec 03 18:00:34 crc kubenswrapper[4687]: I1203 18:00:34.977519 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": dial tcp 10.217.0.151:9292: connect: connection refused" Dec 03 18:00:35 crc kubenswrapper[4687]: I1203 18:00:35.051044 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:35 crc kubenswrapper[4687]: I1203 18:00:35.060202 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9190e920-62f7-4123-925b-f7d47371df49-operator-scripts\") pod \"nova-cell1-51b4-account-create-update-8jd4x\" (UID: \"9190e920-62f7-4123-925b-f7d47371df49\") " pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:35 crc kubenswrapper[4687]: I1203 18:00:35.060369 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nmsq\" (UniqueName: \"kubernetes.io/projected/9190e920-62f7-4123-925b-f7d47371df49-kube-api-access-8nmsq\") pod \"nova-cell1-51b4-account-create-update-8jd4x\" (UID: \"9190e920-62f7-4123-925b-f7d47371df49\") " pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:35 crc kubenswrapper[4687]: I1203 18:00:35.161449 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9190e920-62f7-4123-925b-f7d47371df49-operator-scripts\") pod \"nova-cell1-51b4-account-create-update-8jd4x\" (UID: \"9190e920-62f7-4123-925b-f7d47371df49\") " 
pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:35 crc kubenswrapper[4687]: I1203 18:00:35.161762 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nmsq\" (UniqueName: \"kubernetes.io/projected/9190e920-62f7-4123-925b-f7d47371df49-kube-api-access-8nmsq\") pod \"nova-cell1-51b4-account-create-update-8jd4x\" (UID: \"9190e920-62f7-4123-925b-f7d47371df49\") " pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:35 crc kubenswrapper[4687]: I1203 18:00:35.170440 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9190e920-62f7-4123-925b-f7d47371df49-operator-scripts\") pod \"nova-cell1-51b4-account-create-update-8jd4x\" (UID: \"9190e920-62f7-4123-925b-f7d47371df49\") " pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:35 crc kubenswrapper[4687]: I1203 18:00:35.192323 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nmsq\" (UniqueName: \"kubernetes.io/projected/9190e920-62f7-4123-925b-f7d47371df49-kube-api-access-8nmsq\") pod \"nova-cell1-51b4-account-create-update-8jd4x\" (UID: \"9190e920-62f7-4123-925b-f7d47371df49\") " pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:35 crc kubenswrapper[4687]: I1203 18:00:35.250534 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.363480 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.429815 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b2bf6226-8105-471c-8098-0786e52ab01d","Type":"ContainerStarted","Data":"fe93d5b32c0763982e2d2fa88352262efecc9d82d908cd9ce4da240f10cec127"} Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.461267 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.461579 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.471557 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b","Type":"ContainerDied","Data":"066a7e26933d89eb170c9a5fbf369df7372e690e67548f0135b709a5b4ace105"} Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.471597 4687 scope.go:117] "RemoveContainer" containerID="83f58f1b3c9470627708f8ba563f3afdbcf7ad68f58b1d224ede175ec2ee17a7" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.471707 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.492413 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-scripts\") pod \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.492525 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-config-data\") pod \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.492581 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-sg-core-conf-yaml\") pod \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.492661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mlrg\" (UniqueName: \"kubernetes.io/projected/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-kube-api-access-6mlrg\") pod \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.492751 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-combined-ca-bundle\") pod \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.492809 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-log-httpd\") pod \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.492842 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-run-httpd\") pod \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\" (UID: \"c61d3128-48a6-4b81-a02e-e69a7bfd1b6b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.502109 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.262414069 podStartE2EDuration="13.502090469s" podCreationTimestamp="2025-12-03 18:00:23 +0000 UTC" firstStartedPulling="2025-12-03 18:00:24.756007219 +0000 UTC m=+1257.646702652" lastFinishedPulling="2025-12-03 18:00:35.995683619 +0000 UTC m=+1268.886379052" observedRunningTime="2025-12-03 18:00:36.473947419 +0000 UTC m=+1269.364642852" watchObservedRunningTime="2025-12-03 18:00:36.502090469 +0000 UTC m=+1269.392785902" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.508724 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" (UID: "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.508965 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" (UID: "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.536054 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-kube-api-access-6mlrg" (OuterVolumeSpecName: "kube-api-access-6mlrg") pod "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" (UID: "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b"). InnerVolumeSpecName "kube-api-access-6mlrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.563382 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-scripts" (OuterVolumeSpecName: "scripts") pod "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" (UID: "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.569101 4687 scope.go:117] "RemoveContainer" containerID="3d29f3d3ebdfd8e7f6491468bfca9cf5d0d04582ece5a962c950e0771000cd6f" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.592559 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7bd478575-t6xjs" podStartSLOduration=9.592542291 podStartE2EDuration="9.592542291s" podCreationTimestamp="2025-12-03 18:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:36.564102762 +0000 UTC m=+1269.454798225" watchObservedRunningTime="2025-12-03 18:00:36.592542291 +0000 UTC m=+1269.483237724" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.616359 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.616391 
4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mlrg\" (UniqueName: \"kubernetes.io/projected/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-kube-api-access-6mlrg\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.616401 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.616410 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.657394 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" (UID: "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.733739 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.759238 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7bd478575-t6xjs" podUID="70063881-c779-4ed9-9258-a175b3ee15f4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.771086 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" (UID: "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.777864 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-config-data" (OuterVolumeSpecName: "config-data") pod "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" (UID: "c61d3128-48a6-4b81-a02e-e69a7bfd1b6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.836364 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.836406 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.844173 4687 scope.go:117] "RemoveContainer" containerID="62b5a1185186e458ac90f91a86a2545f20c634027066d6ad4b4e33e6318a3c13" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.844850 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.862957 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.867187 4687 scope.go:117] "RemoveContainer" containerID="9bc9e5245bfc3cdf03fccd5bb88deb95628699821bf2462df1a73d9037afdf9b" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.937981 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-public-tls-certs\") pod \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.938138 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-scripts\") pod \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.938177 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-config-data\") pod \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.938221 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.938285 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnt4c\" (UniqueName: \"kubernetes.io/projected/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-kube-api-access-wnt4c\") pod \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " Dec 03 18:00:36 crc 
kubenswrapper[4687]: I1203 18:00:36.938343 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-logs\") pod \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.938375 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-combined-ca-bundle\") pod \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.938433 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-httpd-run\") pod \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\" (UID: \"b1b60fd3-9d07-4696-8ccf-540ce446eb7b\") " Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.939440 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b1b60fd3-9d07-4696-8ccf-540ce446eb7b" (UID: "b1b60fd3-9d07-4696-8ccf-540ce446eb7b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.940338 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-logs" (OuterVolumeSpecName: "logs") pod "b1b60fd3-9d07-4696-8ccf-540ce446eb7b" (UID: "b1b60fd3-9d07-4696-8ccf-540ce446eb7b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.944891 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-scripts" (OuterVolumeSpecName: "scripts") pod "b1b60fd3-9d07-4696-8ccf-540ce446eb7b" (UID: "b1b60fd3-9d07-4696-8ccf-540ce446eb7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.948104 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-kube-api-access-wnt4c" (OuterVolumeSpecName: "kube-api-access-wnt4c") pod "b1b60fd3-9d07-4696-8ccf-540ce446eb7b" (UID: "b1b60fd3-9d07-4696-8ccf-540ce446eb7b"). InnerVolumeSpecName "kube-api-access-wnt4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.949223 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "b1b60fd3-9d07-4696-8ccf-540ce446eb7b" (UID: "b1b60fd3-9d07-4696-8ccf-540ce446eb7b"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 18:00:36 crc kubenswrapper[4687]: I1203 18:00:36.986257 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1b60fd3-9d07-4696-8ccf-540ce446eb7b" (UID: "b1b60fd3-9d07-4696-8ccf-540ce446eb7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.004079 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-51b4-account-create-update-8jd4x"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.015452 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b1b60fd3-9d07-4696-8ccf-540ce446eb7b" (UID: "b1b60fd3-9d07-4696-8ccf-540ce446eb7b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.019632 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-config-data" (OuterVolumeSpecName: "config-data") pod "b1b60fd3-9d07-4696-8ccf-540ce446eb7b" (UID: "b1b60fd3-9d07-4696-8ccf-540ce446eb7b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.040552 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-config-data\") pod \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.040630 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwbxc\" (UniqueName: \"kubernetes.io/projected/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-kube-api-access-nwbxc\") pod \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.040661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-scripts\") pod \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.040785 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-logs\") pod \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.040810 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-httpd-run\") pod \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.040855 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-internal-tls-certs\") pod \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.040892 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.040960 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-combined-ca-bundle\") pod \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\" (UID: \"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e\") " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.041471 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.041494 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.041503 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.041525 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.041534 4687 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-wnt4c\" (UniqueName: \"kubernetes.io/projected/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-kube-api-access-wnt4c\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.041545 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.041553 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.041562 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b1b60fd3-9d07-4696-8ccf-540ce446eb7b-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.041594 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-logs" (OuterVolumeSpecName: "logs") pod "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" (UID: "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.043843 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" (UID: "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.045840 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-kube-api-access-nwbxc" (OuterVolumeSpecName: "kube-api-access-nwbxc") pod "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" (UID: "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e"). InnerVolumeSpecName "kube-api-access-nwbxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.047750 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-scripts" (OuterVolumeSpecName: "scripts") pod "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" (UID: "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.049216 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" (UID: "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.080583 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.122735 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" (UID: "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.149987 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.150220 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.150292 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwbxc\" (UniqueName: \"kubernetes.io/projected/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-kube-api-access-nwbxc\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.150345 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.150403 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.150489 4687 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.150544 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.175363 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.180287 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-config-data" (OuterVolumeSpecName: "config-data") pod "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" (UID: "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.202799 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.202999 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" (UID: "d5ec96d2-f6a4-4311-b80e-607bdfbbd52e"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.203047 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.215730 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: E1203 18:00:37.217985 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="sg-core" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218033 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="sg-core" Dec 03 18:00:37 crc kubenswrapper[4687]: E1203 18:00:37.218051 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" containerName="glance-log" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218057 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" containerName="glance-log" Dec 03 18:00:37 crc kubenswrapper[4687]: E1203 18:00:37.218068 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="ceilometer-notification-agent" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218074 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="ceilometer-notification-agent" Dec 03 18:00:37 crc kubenswrapper[4687]: E1203 18:00:37.218162 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" containerName="glance-httpd" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218169 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" 
containerName="glance-httpd" Dec 03 18:00:37 crc kubenswrapper[4687]: E1203 18:00:37.218188 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" containerName="glance-httpd" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218195 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" containerName="glance-httpd" Dec 03 18:00:37 crc kubenswrapper[4687]: E1203 18:00:37.218205 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="ceilometer-central-agent" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218236 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="ceilometer-central-agent" Dec 03 18:00:37 crc kubenswrapper[4687]: E1203 18:00:37.218259 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="proxy-httpd" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218264 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="proxy-httpd" Dec 03 18:00:37 crc kubenswrapper[4687]: E1203 18:00:37.218273 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" containerName="glance-log" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218279 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" containerName="glance-log" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218516 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="ceilometer-notification-agent" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218549 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" 
containerName="glance-httpd" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218562 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" containerName="glance-log" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218569 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="sg-core" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218581 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" containerName="glance-log" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218589 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="proxy-httpd" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.218600 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" containerName="ceilometer-central-agent" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.221165 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" containerName="glance-httpd" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.225148 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.225277 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.230423 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.231076 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.252475 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.252563 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.252617 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.361526 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.361926 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-log-httpd\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.361963 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.361991 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-scripts\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.362042 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-run-httpd\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.362098 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq789\" (UniqueName: \"kubernetes.io/projected/430a5e1c-3677-42fe-8208-584dbf689995-kube-api-access-fq789\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.362140 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-config-data\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.389044 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-jnf9v"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 
18:00:37.394848 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7bd478575-t6xjs" podUID="70063881-c779-4ed9-9258-a175b3ee15f4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.397955 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7j48z"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.458380 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c61d3128-48a6-4b81-a02e-e69a7bfd1b6b" path="/var/lib/kubelet/pods/c61d3128-48a6-4b81-a02e-e69a7bfd1b6b/volumes" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.460163 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2750-account-create-update-rhllc"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.468503 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-run-httpd\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.470351 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq789\" (UniqueName: \"kubernetes.io/projected/430a5e1c-3677-42fe-8208-584dbf689995-kube-api-access-fq789\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.470656 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-config-data\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.470802 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.470992 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-log-httpd\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.471143 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.471287 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-scripts\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.470197 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-run-httpd\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.472159 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-log-httpd\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " 
pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.477965 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.478549 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.494068 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7bd478575-t6xjs" event={"ID":"70063881-c779-4ed9-9258-a175b3ee15f4","Type":"ContainerStarted","Data":"27578c07e291a74fafad304050103a9596e37bef2e3a96a67be5117f1da86a9d"} Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.497585 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq789\" (UniqueName: \"kubernetes.io/projected/430a5e1c-3677-42fe-8208-584dbf689995-kube-api-access-fq789\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.499214 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-scripts\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.499548 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" 
event={"ID":"9190e920-62f7-4123-925b-f7d47371df49","Type":"ContainerStarted","Data":"16cc30c979a0d4dbf89d5ca9aa632da49f6c7654030b69e847d08378a9b20aa9"} Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.499579 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" event={"ID":"9190e920-62f7-4123-925b-f7d47371df49","Type":"ContainerStarted","Data":"3c287e329dfed41c943ca419bab4698f51397d16d86a2f4a8ffa963e4ebe1fa0"} Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.499701 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-config-data\") pod \"ceilometer-0\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.500644 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0b6f-account-create-update-fk9cq"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.504448 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jnf9v" event={"ID":"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b","Type":"ContainerStarted","Data":"fb39a096fb1768285c8a71bee5abaac1015aa624b5a1d8ab945bfc3db30dd44f"} Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.505666 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jtncf" event={"ID":"dd6c641d-c691-45d3-8549-25373fef300c","Type":"ContainerStarted","Data":"8706480a131e77d50343d17fa60cc50047de6869b00a0b90670bfd2167eec2be"} Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.505933 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-7bd478575-t6xjs" podUID="70063881-c779-4ed9-9258-a175b3ee15f4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.512712 
4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.512700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b1b60fd3-9d07-4696-8ccf-540ce446eb7b","Type":"ContainerDied","Data":"c95cc206bdc64ca5ea050ed1dcfcf393504ff24de192f5917a92d6c0cf75ae67"} Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.512803 4687 scope.go:117] "RemoveContainer" containerID="438523e3d2999130dea41de7ac0d605343b6151204268e021bf10fa5e804885a" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.514655 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jtncf"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.517275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d5ec96d2-f6a4-4311-b80e-607bdfbbd52e","Type":"ContainerDied","Data":"85f06f6d7cd50cfabfc1e2248813b39313dfa93ff7ad39b11a69224f21da520e"} Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.517392 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.521811 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7j48z" event={"ID":"08b03712-0693-4868-844b-2238f9703459","Type":"ContainerStarted","Data":"69871048f8e6a9bba6e5371fa4ef22d1d8022326add30fd791abb7aebeaa5ce6"} Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.531830 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2750-account-create-update-rhllc" event={"ID":"aa60c9ab-b67f-4480-8bf3-7027c68166c5","Type":"ContainerStarted","Data":"d5dcfa2272bee0a7cbd5ff9d0c4fc287c8923844779d94e223c48abaa3d8cc5d"} Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.551788 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.618340 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" podStartSLOduration=3.618318588 podStartE2EDuration="3.618318588s" podCreationTimestamp="2025-12-03 18:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:37.522963344 +0000 UTC m=+1270.413658787" watchObservedRunningTime="2025-12-03 18:00:37.618318588 +0000 UTC m=+1270.509014021" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.634193 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.646192 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.651542 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 18:00:37 crc 
kubenswrapper[4687]: I1203 18:00:37.659791 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.668477 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.670478 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.672833 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.672933 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hh25f" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.673105 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.673314 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.677608 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.685009 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.687576 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.689584 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.689770 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.703086 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.708720 4687 scope.go:117] "RemoveContainer" containerID="b8b8e9567bb8052c7af385d033a0bd56025ab703e11a800592b0aa5a4ea127c3" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.803931 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-config-data\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804244 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804360 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " 
pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804401 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804440 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a919d81a-089d-4146-a4ee-c2db16491d11-logs\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804472 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804496 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmbhp\" (UniqueName: \"kubernetes.io/projected/a919d81a-089d-4146-a4ee-c2db16491d11-kube-api-access-lmbhp\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804540 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3524def-b150-4d8d-9315-b4435781cf34-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804568 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a919d81a-089d-4146-a4ee-c2db16491d11-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804609 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804675 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3524def-b150-4d8d-9315-b4435781cf34-logs\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804714 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-scripts\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: 
\"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804791 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcn4\" (UniqueName: \"kubernetes.io/projected/c3524def-b150-4d8d-9315-b4435781cf34-kube-api-access-jfcn4\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804822 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.804869 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.906532 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.906934 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfcn4\" (UniqueName: \"kubernetes.io/projected/c3524def-b150-4d8d-9315-b4435781cf34-kube-api-access-jfcn4\") pod \"glance-default-internal-api-0\" (UID: 
\"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.906972 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907019 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907050 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-config-data\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907072 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907151 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " 
pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907194 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907233 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a919d81a-089d-4146-a4ee-c2db16491d11-logs\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907267 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907291 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmbhp\" (UniqueName: \"kubernetes.io/projected/a919d81a-089d-4146-a4ee-c2db16491d11-kube-api-access-lmbhp\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907340 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3524def-b150-4d8d-9315-b4435781cf34-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc 
kubenswrapper[4687]: I1203 18:00:37.907372 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a919d81a-089d-4146-a4ee-c2db16491d11-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907415 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907456 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3524def-b150-4d8d-9315-b4435781cf34-logs\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.907496 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-scripts\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.908207 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.910020 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3524def-b150-4d8d-9315-b4435781cf34-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.910062 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.910432 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a919d81a-089d-4146-a4ee-c2db16491d11-logs\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.912168 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a919d81a-089d-4146-a4ee-c2db16491d11-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.914190 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3524def-b150-4d8d-9315-b4435781cf34-logs\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.918075 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.920563 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.921038 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-config-data\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.921714 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.922578 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.935217 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.936422 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a919d81a-089d-4146-a4ee-c2db16491d11-scripts\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.936526 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3524def-b150-4d8d-9315-b4435781cf34-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.936977 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfcn4\" (UniqueName: \"kubernetes.io/projected/c3524def-b150-4d8d-9315-b4435781cf34-kube-api-access-jfcn4\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.958537 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmbhp\" (UniqueName: \"kubernetes.io/projected/a919d81a-089d-4146-a4ee-c2db16491d11-kube-api-access-lmbhp\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.980879 4687 scope.go:117] "RemoveContainer" containerID="be7d3f13d113d001caffcacb29157eb5808f0aa43792c298dc3540709114c41f" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.985671 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"c3524def-b150-4d8d-9315-b4435781cf34\") " pod="openstack/glance-default-internal-api-0" Dec 03 18:00:37 crc kubenswrapper[4687]: I1203 18:00:37.986523 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"a919d81a-089d-4146-a4ee-c2db16491d11\") " pod="openstack/glance-default-external-api-0" Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.170883 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.210448 4687 scope.go:117] "RemoveContainer" containerID="c9b6710490130851c7c0c4cd38651ab01ce2c4618dc004e0ec2c0ec17b932425" Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.254313 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.341068 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.550155 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" event={"ID":"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb","Type":"ContainerStarted","Data":"598a3a2004c1cd6f0544d541eb18085eab1d8f72f01283a806f2b0ae29d26b6a"} Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.551130 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" event={"ID":"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb","Type":"ContainerStarted","Data":"eb2c860058e7ada130cb01fd11cde7567c09485118f4ac495119b84fa4128aaa"} Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.579699 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" podStartSLOduration=4.579676307 podStartE2EDuration="4.579676307s" podCreationTimestamp="2025-12-03 18:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:38.57496783 +0000 UTC m=+1271.465663263" watchObservedRunningTime="2025-12-03 18:00:38.579676307 +0000 UTC m=+1271.470371730" Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.587978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" event={"ID":"9190e920-62f7-4123-925b-f7d47371df49","Type":"ContainerDied","Data":"16cc30c979a0d4dbf89d5ca9aa632da49f6c7654030b69e847d08378a9b20aa9"} Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.587822 4687 generic.go:334] "Generic (PLEG): container finished" podID="9190e920-62f7-4123-925b-f7d47371df49" 
containerID="16cc30c979a0d4dbf89d5ca9aa632da49f6c7654030b69e847d08378a9b20aa9" exitCode=0 Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.593890 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerStarted","Data":"ea63c6c352933c5a9ed3a6c373631583527bc0c3ab5fc2f6a46afafa9a22f8ae"} Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.595660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7j48z" event={"ID":"08b03712-0693-4868-844b-2238f9703459","Type":"ContainerStarted","Data":"e96d3423b0c75c64adf41de778f58b1a0d4db53f25c2268ae661428cab33ecec"} Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.603303 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2750-account-create-update-rhllc" event={"ID":"aa60c9ab-b67f-4480-8bf3-7027c68166c5","Type":"ContainerStarted","Data":"2fa61684b1b57e8e27b69aaa3cc1f4fc3dd878c8650cbd7c2e098398551bbacf"} Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.613808 4687 generic.go:334] "Generic (PLEG): container finished" podID="1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b" containerID="4a98f67f8ed74b45e48dbb6a8dcd4d35c322e73bb89821943833a348f1515dac" exitCode=0 Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.613884 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jnf9v" event={"ID":"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b","Type":"ContainerDied","Data":"4a98f67f8ed74b45e48dbb6a8dcd4d35c322e73bb89821943833a348f1515dac"} Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.645057 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jtncf" event={"ID":"dd6c641d-c691-45d3-8549-25373fef300c","Type":"ContainerStarted","Data":"836b2b57a9c16d200864574b58edd4b9eb0e851e37ffad6192b2db07fd64c4de"} Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.645071 4687 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-7j48z" podStartSLOduration=4.645049412 podStartE2EDuration="4.645049412s" podCreationTimestamp="2025-12-03 18:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:38.644805864 +0000 UTC m=+1271.535501297" watchObservedRunningTime="2025-12-03 18:00:38.645049412 +0000 UTC m=+1271.535744845" Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.655079 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.683020 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2750-account-create-update-rhllc" podStartSLOduration=4.682994146 podStartE2EDuration="4.682994146s" podCreationTimestamp="2025-12-03 18:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:38.668826694 +0000 UTC m=+1271.559522117" watchObservedRunningTime="2025-12-03 18:00:38.682994146 +0000 UTC m=+1271.573689579" Dec 03 18:00:38 crc kubenswrapper[4687]: I1203 18:00:38.694967 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-jtncf" podStartSLOduration=4.694946738 podStartE2EDuration="4.694946738s" podCreationTimestamp="2025-12-03 18:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:38.687697832 +0000 UTC m=+1271.578393265" watchObservedRunningTime="2025-12-03 18:00:38.694946738 +0000 UTC m=+1271.585642171" Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.042427 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] 
Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.133416 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.424015 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b60fd3-9d07-4696-8ccf-540ce446eb7b" path="/var/lib/kubelet/pods/b1b60fd3-9d07-4696-8ccf-540ce446eb7b/volumes" Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.424984 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ec96d2-f6a4-4311-b80e-607bdfbbd52e" path="/var/lib/kubelet/pods/d5ec96d2-f6a4-4311-b80e-607bdfbbd52e/volumes" Dec 03 18:00:39 crc kubenswrapper[4687]: E1203 18:00:39.578978 4687 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/504cf3a03232773c68ad1e8f28605f43c29b24aaf68d9f3ba5654bd99b30a66d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/504cf3a03232773c68ad1e8f28605f43c29b24aaf68d9f3ba5654bd99b30a66d/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_glance-default-external-api-0_b1b60fd3-9d07-4696-8ccf-540ce446eb7b/glance-log/0.log" to get inode usage: stat /var/log/pods/openstack_glance-default-external-api-0_b1b60fd3-9d07-4696-8ccf-540ce446eb7b/glance-log/0.log: no such file or directory Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.657776 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerStarted","Data":"44a202dedd8093b47f6750f99b8af02dd10ae8f9327f2fe867d1d3fae8741c79"} Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.661345 4687 generic.go:334] "Generic (PLEG): container finished" podID="08b03712-0693-4868-844b-2238f9703459" containerID="e96d3423b0c75c64adf41de778f58b1a0d4db53f25c2268ae661428cab33ecec" exitCode=0 Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.661412 
4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7j48z" event={"ID":"08b03712-0693-4868-844b-2238f9703459","Type":"ContainerDied","Data":"e96d3423b0c75c64adf41de778f58b1a0d4db53f25c2268ae661428cab33ecec"} Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.663148 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a919d81a-089d-4146-a4ee-c2db16491d11","Type":"ContainerStarted","Data":"64ca6f43a6a0a7b94d13501de9bc41a4ae6d3efb9bad3e77db66e468f6db91f4"} Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.667234 4687 generic.go:334] "Generic (PLEG): container finished" podID="aa60c9ab-b67f-4480-8bf3-7027c68166c5" containerID="2fa61684b1b57e8e27b69aaa3cc1f4fc3dd878c8650cbd7c2e098398551bbacf" exitCode=0 Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.667304 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2750-account-create-update-rhllc" event={"ID":"aa60c9ab-b67f-4480-8bf3-7027c68166c5","Type":"ContainerDied","Data":"2fa61684b1b57e8e27b69aaa3cc1f4fc3dd878c8650cbd7c2e098398551bbacf"} Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.671531 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3524def-b150-4d8d-9315-b4435781cf34","Type":"ContainerStarted","Data":"00391742c1f919ee034aba7f9247b7cb1fd168266cbb3d62643b38061e51c256"} Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.674439 4687 generic.go:334] "Generic (PLEG): container finished" podID="e6b7a829-70bb-4e5d-9f72-3f2cf68563fb" containerID="598a3a2004c1cd6f0544d541eb18085eab1d8f72f01283a806f2b0ae29d26b6a" exitCode=0 Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.674510 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" 
event={"ID":"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb","Type":"ContainerDied","Data":"598a3a2004c1cd6f0544d541eb18085eab1d8f72f01283a806f2b0ae29d26b6a"} Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.676781 4687 generic.go:334] "Generic (PLEG): container finished" podID="dd6c641d-c691-45d3-8549-25373fef300c" containerID="836b2b57a9c16d200864574b58edd4b9eb0e851e37ffad6192b2db07fd64c4de" exitCode=0 Dec 03 18:00:39 crc kubenswrapper[4687]: I1203 18:00:39.677099 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jtncf" event={"ID":"dd6c641d-c691-45d3-8549-25373fef300c","Type":"ContainerDied","Data":"836b2b57a9c16d200864574b58edd4b9eb0e851e37ffad6192b2db07fd64c4de"} Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.141595 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.184085 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9190e920-62f7-4123-925b-f7d47371df49-operator-scripts\") pod \"9190e920-62f7-4123-925b-f7d47371df49\" (UID: \"9190e920-62f7-4123-925b-f7d47371df49\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.184616 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nmsq\" (UniqueName: \"kubernetes.io/projected/9190e920-62f7-4123-925b-f7d47371df49-kube-api-access-8nmsq\") pod \"9190e920-62f7-4123-925b-f7d47371df49\" (UID: \"9190e920-62f7-4123-925b-f7d47371df49\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.185018 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9190e920-62f7-4123-925b-f7d47371df49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9190e920-62f7-4123-925b-f7d47371df49" (UID: "9190e920-62f7-4123-925b-f7d47371df49"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.185186 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9190e920-62f7-4123-925b-f7d47371df49-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.205360 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9190e920-62f7-4123-925b-f7d47371df49-kube-api-access-8nmsq" (OuterVolumeSpecName: "kube-api-access-8nmsq") pod "9190e920-62f7-4123-925b-f7d47371df49" (UID: "9190e920-62f7-4123-925b-f7d47371df49"). InnerVolumeSpecName "kube-api-access-8nmsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.268189 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.286522 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nmsq\" (UniqueName: \"kubernetes.io/projected/9190e920-62f7-4123-925b-f7d47371df49-kube-api-access-8nmsq\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.388080 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-984f5\" (UniqueName: \"kubernetes.io/projected/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-kube-api-access-984f5\") pod \"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\" (UID: \"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.388180 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-operator-scripts\") pod \"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\" (UID: 
\"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.389041 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b" (UID: "1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.405776 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-kube-api-access-984f5" (OuterVolumeSpecName: "kube-api-access-984f5") pod "1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b" (UID: "1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b"). InnerVolumeSpecName "kube-api-access-984f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: E1203 18:00:40.416961 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2559a1aa_62c1_43b3_9183_66ebe4d8efc9.slice/crio-conmon-e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1b60fd3_9d07_4696_8ccf_540ce446eb7b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc61d3128_48a6_4b81_a02e_e69a7bfd1b6b.slice/crio-066a7e26933d89eb170c9a5fbf369df7372e690e67548f0135b709a5b4ace105\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5ec96d2_f6a4_4311_b80e_607bdfbbd52e.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2559a1aa_62c1_43b3_9183_66ebe4d8efc9.slice/crio-e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5ec96d2_f6a4_4311_b80e_607bdfbbd52e.slice/crio-85f06f6d7cd50cfabfc1e2248813b39313dfa93ff7ad39b11a69224f21da520e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1b60fd3_9d07_4696_8ccf_540ce446eb7b.slice/crio-c95cc206bdc64ca5ea050ed1dcfcf393504ff24de192f5917a92d6c0cf75ae67\": RecentStats: unable to find data in memory cache]" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.479481 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58975c669d-5qj7w" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.491359 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-984f5\" (UniqueName: \"kubernetes.io/projected/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-kube-api-access-984f5\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.491407 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.598869 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-logs\") pod \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.599164 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-secret-key\") pod \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.599191 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-scripts\") pod \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.599253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-combined-ca-bundle\") pod \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.599364 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkrfc\" (UniqueName: \"kubernetes.io/projected/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-kube-api-access-nkrfc\") pod \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.599422 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-tls-certs\") pod \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.599446 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-config-data\") pod \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\" (UID: \"2559a1aa-62c1-43b3-9183-66ebe4d8efc9\") " Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 
18:00:40.600482 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-logs" (OuterVolumeSpecName: "logs") pod "2559a1aa-62c1-43b3-9183-66ebe4d8efc9" (UID: "2559a1aa-62c1-43b3-9183-66ebe4d8efc9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.628337 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-kube-api-access-nkrfc" (OuterVolumeSpecName: "kube-api-access-nkrfc") pod "2559a1aa-62c1-43b3-9183-66ebe4d8efc9" (UID: "2559a1aa-62c1-43b3-9183-66ebe4d8efc9"). InnerVolumeSpecName "kube-api-access-nkrfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.629464 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2559a1aa-62c1-43b3-9183-66ebe4d8efc9" (UID: "2559a1aa-62c1-43b3-9183-66ebe4d8efc9"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.701281 4687 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.701317 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkrfc\" (UniqueName: \"kubernetes.io/projected/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-kube-api-access-nkrfc\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.701330 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.725273 4687 generic.go:334] "Generic (PLEG): container finished" podID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerID="e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824" exitCode=137 Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.725345 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58975c669d-5qj7w" event={"ID":"2559a1aa-62c1-43b3-9183-66ebe4d8efc9","Type":"ContainerDied","Data":"e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824"} Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.725379 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58975c669d-5qj7w" event={"ID":"2559a1aa-62c1-43b3-9183-66ebe4d8efc9","Type":"ContainerDied","Data":"f12a5e5077f32685f2916c5b1125ff6c0b85114ba7ce323266c039f28d6b5c81"} Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.725398 4687 scope.go:117] "RemoveContainer" containerID="4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.725524 4687 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58975c669d-5qj7w" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.729885 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a919d81a-089d-4146-a4ee-c2db16491d11","Type":"ContainerStarted","Data":"86488473f880486b7ab0f9565d7f855925b7d46eb57e8cb75e11be4381755067"} Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.730920 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2559a1aa-62c1-43b3-9183-66ebe4d8efc9" (UID: "2559a1aa-62c1-43b3-9183-66ebe4d8efc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.731712 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3524def-b150-4d8d-9315-b4435781cf34","Type":"ContainerStarted","Data":"3f21360dc5af5a8ad5d7bbfe4f56186078c36cde3944fcbe5b14d47950a152db"} Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.739714 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" event={"ID":"9190e920-62f7-4123-925b-f7d47371df49","Type":"ContainerDied","Data":"3c287e329dfed41c943ca419bab4698f51397d16d86a2f4a8ffa963e4ebe1fa0"} Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.739743 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c287e329dfed41c943ca419bab4698f51397d16d86a2f4a8ffa963e4ebe1fa0" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.739805 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-51b4-account-create-update-8jd4x" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.749619 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-jnf9v" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.749736 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-jnf9v" event={"ID":"1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b","Type":"ContainerDied","Data":"fb39a096fb1768285c8a71bee5abaac1015aa624b5a1d8ab945bfc3db30dd44f"} Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.749823 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb39a096fb1768285c8a71bee5abaac1015aa624b5a1d8ab945bfc3db30dd44f" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.752946 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-scripts" (OuterVolumeSpecName: "scripts") pod "2559a1aa-62c1-43b3-9183-66ebe4d8efc9" (UID: "2559a1aa-62c1-43b3-9183-66ebe4d8efc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.755738 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-config-data" (OuterVolumeSpecName: "config-data") pod "2559a1aa-62c1-43b3-9183-66ebe4d8efc9" (UID: "2559a1aa-62c1-43b3-9183-66ebe4d8efc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.758845 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerStarted","Data":"3034f1d46b1ee163539765dd420a85843c3a0d2f0c11cb69d37d5c6eb3001be4"} Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.760137 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2559a1aa-62c1-43b3-9183-66ebe4d8efc9" (UID: "2559a1aa-62c1-43b3-9183-66ebe4d8efc9"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.803276 4687 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.803307 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.803317 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:40 crc kubenswrapper[4687]: I1203 18:00:40.803325 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2559a1aa-62c1-43b3-9183-66ebe4d8efc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.070799 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/horizon-58975c669d-5qj7w"] Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.087509 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-58975c669d-5qj7w"] Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.155959 4687 scope.go:117] "RemoveContainer" containerID="e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.181869 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.189519 4687 scope.go:117] "RemoveContainer" containerID="4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd" Dec 03 18:00:41 crc kubenswrapper[4687]: E1203 18:00:41.196737 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd\": container with ID starting with 4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd not found: ID does not exist" containerID="4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.196781 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd"} err="failed to get container status \"4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd\": rpc error: code = NotFound desc = could not find container \"4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd\": container with ID starting with 4988d89e382b9a8eed761e25309d8c2c30737b4e2e3215f78b12d492d549c1dd not found: ID does not exist" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.196806 4687 scope.go:117] "RemoveContainer" 
containerID="e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824" Dec 03 18:00:41 crc kubenswrapper[4687]: E1203 18:00:41.197193 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824\": container with ID starting with e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824 not found: ID does not exist" containerID="e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.197241 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824"} err="failed to get container status \"e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824\": rpc error: code = NotFound desc = could not find container \"e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824\": container with ID starting with e23589d8852caf808aba853918ca7d9526164015c121338a0fc5e16b2b1cc824 not found: ID does not exist" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.315804 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-operator-scripts\") pod \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\" (UID: \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\") " Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.316327 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph7jd\" (UniqueName: \"kubernetes.io/projected/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-kube-api-access-ph7jd\") pod \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\" (UID: \"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb\") " Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.318984 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6b7a829-70bb-4e5d-9f72-3f2cf68563fb" (UID: "e6b7a829-70bb-4e5d-9f72-3f2cf68563fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.325980 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-kube-api-access-ph7jd" (OuterVolumeSpecName: "kube-api-access-ph7jd") pod "e6b7a829-70bb-4e5d-9f72-3f2cf68563fb" (UID: "e6b7a829-70bb-4e5d-9f72-3f2cf68563fb"). InnerVolumeSpecName "kube-api-access-ph7jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.372292 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.379703 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jtncf" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.394754 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.435787 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jk7w\" (UniqueName: \"kubernetes.io/projected/dd6c641d-c691-45d3-8549-25373fef300c-kube-api-access-2jk7w\") pod \"dd6c641d-c691-45d3-8549-25373fef300c\" (UID: \"dd6c641d-c691-45d3-8549-25373fef300c\") " Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.435908 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd6c641d-c691-45d3-8549-25373fef300c-operator-scripts\") pod \"dd6c641d-c691-45d3-8549-25373fef300c\" (UID: \"dd6c641d-c691-45d3-8549-25373fef300c\") " Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.435943 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08b03712-0693-4868-844b-2238f9703459-operator-scripts\") pod \"08b03712-0693-4868-844b-2238f9703459\" (UID: \"08b03712-0693-4868-844b-2238f9703459\") " Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.435980 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncqsm\" (UniqueName: \"kubernetes.io/projected/08b03712-0693-4868-844b-2238f9703459-kube-api-access-ncqsm\") pod \"08b03712-0693-4868-844b-2238f9703459\" (UID: \"08b03712-0693-4868-844b-2238f9703459\") " Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.436397 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.436413 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph7jd\" (UniqueName: 
\"kubernetes.io/projected/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb-kube-api-access-ph7jd\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.437532 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd6c641d-c691-45d3-8549-25373fef300c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd6c641d-c691-45d3-8549-25373fef300c" (UID: "dd6c641d-c691-45d3-8549-25373fef300c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.442331 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6c641d-c691-45d3-8549-25373fef300c-kube-api-access-2jk7w" (OuterVolumeSpecName: "kube-api-access-2jk7w") pod "dd6c641d-c691-45d3-8549-25373fef300c" (UID: "dd6c641d-c691-45d3-8549-25373fef300c"). InnerVolumeSpecName "kube-api-access-2jk7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.442464 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b03712-0693-4868-844b-2238f9703459-kube-api-access-ncqsm" (OuterVolumeSpecName: "kube-api-access-ncqsm") pod "08b03712-0693-4868-844b-2238f9703459" (UID: "08b03712-0693-4868-844b-2238f9703459"). InnerVolumeSpecName "kube-api-access-ncqsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.444618 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08b03712-0693-4868-844b-2238f9703459-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08b03712-0693-4868-844b-2238f9703459" (UID: "08b03712-0693-4868-844b-2238f9703459"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.464108 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" path="/var/lib/kubelet/pods/2559a1aa-62c1-43b3-9183-66ebe4d8efc9/volumes" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.538096 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b99f\" (UniqueName: \"kubernetes.io/projected/aa60c9ab-b67f-4480-8bf3-7027c68166c5-kube-api-access-8b99f\") pod \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\" (UID: \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\") " Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.538373 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa60c9ab-b67f-4480-8bf3-7027c68166c5-operator-scripts\") pod \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\" (UID: \"aa60c9ab-b67f-4480-8bf3-7027c68166c5\") " Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.538937 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa60c9ab-b67f-4480-8bf3-7027c68166c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa60c9ab-b67f-4480-8bf3-7027c68166c5" (UID: "aa60c9ab-b67f-4480-8bf3-7027c68166c5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.539060 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa60c9ab-b67f-4480-8bf3-7027c68166c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.539087 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd6c641d-c691-45d3-8549-25373fef300c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.539100 4687 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08b03712-0693-4868-844b-2238f9703459-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.539111 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncqsm\" (UniqueName: \"kubernetes.io/projected/08b03712-0693-4868-844b-2238f9703459-kube-api-access-ncqsm\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.539143 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jk7w\" (UniqueName: \"kubernetes.io/projected/dd6c641d-c691-45d3-8549-25373fef300c-kube-api-access-2jk7w\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.541747 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa60c9ab-b67f-4480-8bf3-7027c68166c5-kube-api-access-8b99f" (OuterVolumeSpecName: "kube-api-access-8b99f") pod "aa60c9ab-b67f-4480-8bf3-7027c68166c5" (UID: "aa60c9ab-b67f-4480-8bf3-7027c68166c5"). InnerVolumeSpecName "kube-api-access-8b99f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.643516 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b99f\" (UniqueName: \"kubernetes.io/projected/aa60c9ab-b67f-4480-8bf3-7027c68166c5-kube-api-access-8b99f\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.767904 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" event={"ID":"e6b7a829-70bb-4e5d-9f72-3f2cf68563fb","Type":"ContainerDied","Data":"eb2c860058e7ada130cb01fd11cde7567c09485118f4ac495119b84fa4128aaa"} Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.767950 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb2c860058e7ada130cb01fd11cde7567c09485118f4ac495119b84fa4128aaa" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.767919 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0b6f-account-create-update-fk9cq" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.770057 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jtncf" event={"ID":"dd6c641d-c691-45d3-8549-25373fef300c","Type":"ContainerDied","Data":"8706480a131e77d50343d17fa60cc50047de6869b00a0b90670bfd2167eec2be"} Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.770201 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8706480a131e77d50343d17fa60cc50047de6869b00a0b90670bfd2167eec2be" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.770104 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jtncf" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.773330 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerStarted","Data":"7f8db06f7e3568b8eedc6bd7f84283f2c164d875b67b986ec246fba7a43c1a60"} Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.774979 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7j48z" event={"ID":"08b03712-0693-4868-844b-2238f9703459","Type":"ContainerDied","Data":"69871048f8e6a9bba6e5371fa4ef22d1d8022326add30fd791abb7aebeaa5ce6"} Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.775018 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69871048f8e6a9bba6e5371fa4ef22d1d8022326add30fd791abb7aebeaa5ce6" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.775176 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7j48z" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.778353 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a919d81a-089d-4146-a4ee-c2db16491d11","Type":"ContainerStarted","Data":"804c890e750673acc89e4ba858b2d2c95c0f284528d0c7fd784816ba859b4d81"} Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.779953 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2750-account-create-update-rhllc" event={"ID":"aa60c9ab-b67f-4480-8bf3-7027c68166c5","Type":"ContainerDied","Data":"d5dcfa2272bee0a7cbd5ff9d0c4fc287c8923844779d94e223c48abaa3d8cc5d"} Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.779976 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5dcfa2272bee0a7cbd5ff9d0c4fc287c8923844779d94e223c48abaa3d8cc5d" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.780175 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2750-account-create-update-rhllc" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.782809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3524def-b150-4d8d-9315-b4435781cf34","Type":"ContainerStarted","Data":"23af1cf6c18910380125364dd4fc1c744b3943cd486b6a244c35726c7d3666fe"} Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.817201 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.817182463 podStartE2EDuration="4.817182463s" podCreationTimestamp="2025-12-03 18:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:41.79703635 +0000 UTC m=+1274.687731793" watchObservedRunningTime="2025-12-03 18:00:41.817182463 +0000 UTC m=+1274.707877896" Dec 03 18:00:41 crc kubenswrapper[4687]: I1203 18:00:41.832095 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.832071936 podStartE2EDuration="4.832071936s" podCreationTimestamp="2025-12-03 18:00:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:00:41.827237764 +0000 UTC m=+1274.717933197" watchObservedRunningTime="2025-12-03 18:00:41.832071936 +0000 UTC m=+1274.722767369" Dec 03 18:00:42 crc kubenswrapper[4687]: I1203 18:00:42.795846 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerStarted","Data":"ae72de542c2784d7d679eab94644924f5b5ad0de5b71d4f82dff9d979c35557f"} Dec 03 18:00:42 crc kubenswrapper[4687]: I1203 18:00:42.824835 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=1.626635275 podStartE2EDuration="5.824784531s" podCreationTimestamp="2025-12-03 18:00:37 +0000 UTC" firstStartedPulling="2025-12-03 18:00:38.380813649 +0000 UTC m=+1271.271509082" lastFinishedPulling="2025-12-03 18:00:42.578962905 +0000 UTC m=+1275.469658338" observedRunningTime="2025-12-03 18:00:42.818747698 +0000 UTC m=+1275.709443141" watchObservedRunningTime="2025-12-03 18:00:42.824784531 +0000 UTC m=+1275.715479984" Dec 03 18:00:43 crc kubenswrapper[4687]: I1203 18:00:43.388104 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7bd478575-t6xjs" Dec 03 18:00:43 crc kubenswrapper[4687]: I1203 18:00:43.804690 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.111921 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.112484 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.984885 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4npbh"] Dec 03 18:00:44 crc kubenswrapper[4687]: E1203 18:00:44.985372 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b7a829-70bb-4e5d-9f72-3f2cf68563fb" containerName="mariadb-account-create-update" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 
18:00:44.985390 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b7a829-70bb-4e5d-9f72-3f2cf68563fb" containerName="mariadb-account-create-update" Dec 03 18:00:44 crc kubenswrapper[4687]: E1203 18:00:44.985406 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.985413 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" Dec 03 18:00:44 crc kubenswrapper[4687]: E1203 18:00:44.985445 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon-log" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.985454 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon-log" Dec 03 18:00:44 crc kubenswrapper[4687]: E1203 18:00:44.985467 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9190e920-62f7-4123-925b-f7d47371df49" containerName="mariadb-account-create-update" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.985474 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9190e920-62f7-4123-925b-f7d47371df49" containerName="mariadb-account-create-update" Dec 03 18:00:44 crc kubenswrapper[4687]: E1203 18:00:44.985500 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6c641d-c691-45d3-8549-25373fef300c" containerName="mariadb-database-create" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.985507 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6c641d-c691-45d3-8549-25373fef300c" containerName="mariadb-database-create" Dec 03 18:00:44 crc kubenswrapper[4687]: E1203 18:00:44.985524 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa60c9ab-b67f-4480-8bf3-7027c68166c5" containerName="mariadb-account-create-update" Dec 03 
18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.985532 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa60c9ab-b67f-4480-8bf3-7027c68166c5" containerName="mariadb-account-create-update" Dec 03 18:00:44 crc kubenswrapper[4687]: E1203 18:00:44.985546 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b03712-0693-4868-844b-2238f9703459" containerName="mariadb-database-create" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.985554 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b03712-0693-4868-844b-2238f9703459" containerName="mariadb-database-create" Dec 03 18:00:44 crc kubenswrapper[4687]: E1203 18:00:44.985564 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b" containerName="mariadb-database-create" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.985572 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b" containerName="mariadb-database-create" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.986571 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon-log" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.986596 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2559a1aa-62c1-43b3-9183-66ebe4d8efc9" containerName="horizon" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.986607 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6c641d-c691-45d3-8549-25373fef300c" containerName="mariadb-database-create" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.986618 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa60c9ab-b67f-4480-8bf3-7027c68166c5" containerName="mariadb-account-create-update" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.986633 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b" containerName="mariadb-database-create" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.986651 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b7a829-70bb-4e5d-9f72-3f2cf68563fb" containerName="mariadb-account-create-update" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.986663 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b03712-0693-4868-844b-2238f9703459" containerName="mariadb-database-create" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.986681 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9190e920-62f7-4123-925b-f7d47371df49" containerName="mariadb-account-create-update" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.987452 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.997303 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4npbh"] Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.999646 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-75tc5" Dec 03 18:00:44 crc kubenswrapper[4687]: I1203 18:00:44.999897 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.000113 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.105147 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " 
pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.105262 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-scripts\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.105309 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-config-data\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.105458 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvcms\" (UniqueName: \"kubernetes.io/projected/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-kube-api-access-tvcms\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.206938 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvcms\" (UniqueName: \"kubernetes.io/projected/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-kube-api-access-tvcms\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.207030 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.207092 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-scripts\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.207149 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-config-data\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.214889 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-config-data\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.215526 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.222209 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-scripts\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: 
\"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.241823 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvcms\" (UniqueName: \"kubernetes.io/projected/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-kube-api-access-tvcms\") pod \"nova-cell0-conductor-db-sync-4npbh\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.319455 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.794890 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4npbh"] Dec 03 18:00:45 crc kubenswrapper[4687]: I1203 18:00:45.820924 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4npbh" event={"ID":"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e","Type":"ContainerStarted","Data":"f63b17351902caf952388188908a9c75d54c440064b8235ebf7113a5351ea60d"} Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.172425 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.172698 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.206088 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.215049 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.255418 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.255468 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.287147 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.299905 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.850634 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.850931 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.850944 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 18:00:48 crc kubenswrapper[4687]: I1203 18:00:48.850955 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 03 18:00:50 crc kubenswrapper[4687]: I1203 18:00:50.866774 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 18:00:50 crc kubenswrapper[4687]: I1203 18:00:50.867283 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 18:00:51 crc kubenswrapper[4687]: I1203 18:00:51.047961 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 03 18:00:51 crc kubenswrapper[4687]: I1203 18:00:51.048845 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Dec 03 18:00:51 crc kubenswrapper[4687]: I1203 18:00:51.055946 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 18:00:51 crc kubenswrapper[4687]: I1203 18:00:51.056070 4687 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 18:00:51 crc kubenswrapper[4687]: I1203 18:00:51.076573 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.559233 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.560089 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="ceilometer-central-agent" containerID="cri-o://44a202dedd8093b47f6750f99b8af02dd10ae8f9327f2fe867d1d3fae8741c79" gracePeriod=30 Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.560197 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="sg-core" containerID="cri-o://7f8db06f7e3568b8eedc6bd7f84283f2c164d875b67b986ec246fba7a43c1a60" gracePeriod=30 Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.560210 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="ceilometer-notification-agent" containerID="cri-o://3034f1d46b1ee163539765dd420a85843c3a0d2f0c11cb69d37d5c6eb3001be4" gracePeriod=30 Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.560357 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="proxy-httpd" 
containerID="cri-o://ae72de542c2784d7d679eab94644924f5b5ad0de5b71d4f82dff9d979c35557f" gracePeriod=30 Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.575692 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.178:3000/\": EOF" Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.900662 4687 generic.go:334] "Generic (PLEG): container finished" podID="430a5e1c-3677-42fe-8208-584dbf689995" containerID="ae72de542c2784d7d679eab94644924f5b5ad0de5b71d4f82dff9d979c35557f" exitCode=0 Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.900693 4687 generic.go:334] "Generic (PLEG): container finished" podID="430a5e1c-3677-42fe-8208-584dbf689995" containerID="7f8db06f7e3568b8eedc6bd7f84283f2c164d875b67b986ec246fba7a43c1a60" exitCode=2 Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.900711 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerDied","Data":"ae72de542c2784d7d679eab94644924f5b5ad0de5b71d4f82dff9d979c35557f"} Dec 03 18:00:53 crc kubenswrapper[4687]: I1203 18:00:53.900736 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerDied","Data":"7f8db06f7e3568b8eedc6bd7f84283f2c164d875b67b986ec246fba7a43c1a60"} Dec 03 18:00:54 crc kubenswrapper[4687]: I1203 18:00:54.910602 4687 generic.go:334] "Generic (PLEG): container finished" podID="430a5e1c-3677-42fe-8208-584dbf689995" containerID="44a202dedd8093b47f6750f99b8af02dd10ae8f9327f2fe867d1d3fae8741c79" exitCode=0 Dec 03 18:00:54 crc kubenswrapper[4687]: I1203 18:00:54.910701 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerDied","Data":"44a202dedd8093b47f6750f99b8af02dd10ae8f9327f2fe867d1d3fae8741c79"} Dec 03 18:00:54 crc kubenswrapper[4687]: I1203 18:00:54.913503 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4npbh" event={"ID":"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e","Type":"ContainerStarted","Data":"1b6988722fd79c441f7cdfbf33e17fcfe74a4f5f1351124f554fd4be6476708b"} Dec 03 18:00:54 crc kubenswrapper[4687]: I1203 18:00:54.932581 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-4npbh" podStartSLOduration=2.6600374 podStartE2EDuration="10.932562283s" podCreationTimestamp="2025-12-03 18:00:44 +0000 UTC" firstStartedPulling="2025-12-03 18:00:45.779618097 +0000 UTC m=+1278.670313530" lastFinishedPulling="2025-12-03 18:00:54.05214298 +0000 UTC m=+1286.942838413" observedRunningTime="2025-12-03 18:00:54.929536422 +0000 UTC m=+1287.820231855" watchObservedRunningTime="2025-12-03 18:00:54.932562283 +0000 UTC m=+1287.823257716" Dec 03 18:00:56 crc kubenswrapper[4687]: I1203 18:00:56.946268 4687 generic.go:334] "Generic (PLEG): container finished" podID="430a5e1c-3677-42fe-8208-584dbf689995" containerID="3034f1d46b1ee163539765dd420a85843c3a0d2f0c11cb69d37d5c6eb3001be4" exitCode=0 Dec 03 18:00:56 crc kubenswrapper[4687]: I1203 18:00:56.946599 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerDied","Data":"3034f1d46b1ee163539765dd420a85843c3a0d2f0c11cb69d37d5c6eb3001be4"} Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.593843 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.649861 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-config-data\") pod \"430a5e1c-3677-42fe-8208-584dbf689995\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.649905 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-log-httpd\") pod \"430a5e1c-3677-42fe-8208-584dbf689995\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.649930 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-sg-core-conf-yaml\") pod \"430a5e1c-3677-42fe-8208-584dbf689995\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.649954 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-combined-ca-bundle\") pod \"430a5e1c-3677-42fe-8208-584dbf689995\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.650006 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-run-httpd\") pod \"430a5e1c-3677-42fe-8208-584dbf689995\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.650041 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-scripts\") pod \"430a5e1c-3677-42fe-8208-584dbf689995\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.650082 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq789\" (UniqueName: \"kubernetes.io/projected/430a5e1c-3677-42fe-8208-584dbf689995-kube-api-access-fq789\") pod \"430a5e1c-3677-42fe-8208-584dbf689995\" (UID: \"430a5e1c-3677-42fe-8208-584dbf689995\") " Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.651414 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "430a5e1c-3677-42fe-8208-584dbf689995" (UID: "430a5e1c-3677-42fe-8208-584dbf689995"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.651779 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "430a5e1c-3677-42fe-8208-584dbf689995" (UID: "430a5e1c-3677-42fe-8208-584dbf689995"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.660347 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430a5e1c-3677-42fe-8208-584dbf689995-kube-api-access-fq789" (OuterVolumeSpecName: "kube-api-access-fq789") pod "430a5e1c-3677-42fe-8208-584dbf689995" (UID: "430a5e1c-3677-42fe-8208-584dbf689995"). InnerVolumeSpecName "kube-api-access-fq789". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.671551 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-scripts" (OuterVolumeSpecName: "scripts") pod "430a5e1c-3677-42fe-8208-584dbf689995" (UID: "430a5e1c-3677-42fe-8208-584dbf689995"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.689426 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "430a5e1c-3677-42fe-8208-584dbf689995" (UID: "430a5e1c-3677-42fe-8208-584dbf689995"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.745643 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "430a5e1c-3677-42fe-8208-584dbf689995" (UID: "430a5e1c-3677-42fe-8208-584dbf689995"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.755798 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.755832 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.755843 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq789\" (UniqueName: \"kubernetes.io/projected/430a5e1c-3677-42fe-8208-584dbf689995-kube-api-access-fq789\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.755855 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430a5e1c-3677-42fe-8208-584dbf689995-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.755867 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.755879 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.764416 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-config-data" (OuterVolumeSpecName: "config-data") pod "430a5e1c-3677-42fe-8208-584dbf689995" (UID: "430a5e1c-3677-42fe-8208-584dbf689995"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.857731 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430a5e1c-3677-42fe-8208-584dbf689995-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.956865 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430a5e1c-3677-42fe-8208-584dbf689995","Type":"ContainerDied","Data":"ea63c6c352933c5a9ed3a6c373631583527bc0c3ab5fc2f6a46afafa9a22f8ae"} Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.957806 4687 scope.go:117] "RemoveContainer" containerID="ae72de542c2784d7d679eab94644924f5b5ad0de5b71d4f82dff9d979c35557f" Dec 03 18:00:57 crc kubenswrapper[4687]: I1203 18:00:57.957200 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.001681 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.011858 4687 scope.go:117] "RemoveContainer" containerID="7f8db06f7e3568b8eedc6bd7f84283f2c164d875b67b986ec246fba7a43c1a60" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.014970 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.041377 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:58 crc kubenswrapper[4687]: E1203 18:00:58.042013 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="sg-core" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.042116 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="sg-core" 
Dec 03 18:00:58 crc kubenswrapper[4687]: E1203 18:00:58.042230 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="proxy-httpd" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.042297 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="proxy-httpd" Dec 03 18:00:58 crc kubenswrapper[4687]: E1203 18:00:58.042552 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="ceilometer-notification-agent" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.042621 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="ceilometer-notification-agent" Dec 03 18:00:58 crc kubenswrapper[4687]: E1203 18:00:58.042711 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="ceilometer-central-agent" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.042778 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="ceilometer-central-agent" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.043525 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="ceilometer-notification-agent" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.043658 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="ceilometer-central-agent" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.043737 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="proxy-httpd" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.043832 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="430a5e1c-3677-42fe-8208-584dbf689995" containerName="sg-core" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.045880 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.049699 4687 scope.go:117] "RemoveContainer" containerID="3034f1d46b1ee163539765dd420a85843c3a0d2f0c11cb69d37d5c6eb3001be4" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.049934 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.050208 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.072620 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.097336 4687 scope.go:117] "RemoveContainer" containerID="44a202dedd8093b47f6750f99b8af02dd10ae8f9327f2fe867d1d3fae8741c79" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.167355 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-scripts\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.167414 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-config-data\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.167491 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqf46\" 
(UniqueName: \"kubernetes.io/projected/6ae688d1-9d48-4692-8167-edcbaa1e98b7-kube-api-access-bqf46\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.167573 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.167621 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-log-httpd\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.167701 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.167726 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-run-httpd\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.268889 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-log-httpd\") pod \"ceilometer-0\" (UID: 
\"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.268984 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.269004 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-run-httpd\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.269028 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-scripts\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.269051 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-config-data\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.269103 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqf46\" (UniqueName: \"kubernetes.io/projected/6ae688d1-9d48-4692-8167-edcbaa1e98b7-kube-api-access-bqf46\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.269176 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.270184 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-log-httpd\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.270464 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-run-httpd\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.274819 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.276412 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.276706 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-config-data\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 
18:00:58.286114 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqf46\" (UniqueName: \"kubernetes.io/projected/6ae688d1-9d48-4692-8167-edcbaa1e98b7-kube-api-access-bqf46\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.304845 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-scripts\") pod \"ceilometer-0\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.374761 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.866355 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:58 crc kubenswrapper[4687]: I1203 18:00:58.967239 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerStarted","Data":"ce99d72c367972fa32cada10dd31667f97ec6717371d81b040973809dc71a0d7"} Dec 03 18:00:59 crc kubenswrapper[4687]: I1203 18:00:59.439292 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430a5e1c-3677-42fe-8208-584dbf689995" path="/var/lib/kubelet/pods/430a5e1c-3677-42fe-8208-584dbf689995/volumes" Dec 03 18:00:59 crc kubenswrapper[4687]: I1203 18:00:59.482599 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:00:59 crc kubenswrapper[4687]: I1203 18:00:59.982299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerStarted","Data":"886aee6ab03fc307da8864eba8b01b46dd6030d060501042b506ebab1837d5ea"} Dec 03 18:01:00 crc 
kubenswrapper[4687]: I1203 18:01:00.152363 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29413081-r8x9h"] Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.154052 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.161423 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29413081-r8x9h"] Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.209368 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-combined-ca-bundle\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.209427 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-fernet-keys\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.209485 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-config-data\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.209568 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjbxg\" (UniqueName: \"kubernetes.io/projected/b058710c-db65-4f53-b9b7-2e279672355a-kube-api-access-vjbxg\") pod 
\"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.311589 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjbxg\" (UniqueName: \"kubernetes.io/projected/b058710c-db65-4f53-b9b7-2e279672355a-kube-api-access-vjbxg\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.311686 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-combined-ca-bundle\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.311731 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-fernet-keys\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.311776 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-config-data\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.317842 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-fernet-keys\") pod \"keystone-cron-29413081-r8x9h\" (UID: 
\"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.320929 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-combined-ca-bundle\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.328731 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-config-data\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.331081 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjbxg\" (UniqueName: \"kubernetes.io/projected/b058710c-db65-4f53-b9b7-2e279672355a-kube-api-access-vjbxg\") pod \"keystone-cron-29413081-r8x9h\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.483877 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:00 crc kubenswrapper[4687]: I1203 18:01:00.915960 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29413081-r8x9h"] Dec 03 18:01:00 crc kubenswrapper[4687]: W1203 18:01:00.917780 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb058710c_db65_4f53_b9b7_2e279672355a.slice/crio-b18464cd077af42bc2b95d59a270942a4884378b6db17b7faff060ccf2b48212 WatchSource:0}: Error finding container b18464cd077af42bc2b95d59a270942a4884378b6db17b7faff060ccf2b48212: Status 404 returned error can't find the container with id b18464cd077af42bc2b95d59a270942a4884378b6db17b7faff060ccf2b48212 Dec 03 18:01:01 crc kubenswrapper[4687]: I1203 18:01:01.003416 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413081-r8x9h" event={"ID":"b058710c-db65-4f53-b9b7-2e279672355a","Type":"ContainerStarted","Data":"b18464cd077af42bc2b95d59a270942a4884378b6db17b7faff060ccf2b48212"} Dec 03 18:01:01 crc kubenswrapper[4687]: I1203 18:01:01.007633 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerStarted","Data":"9e99c91874d27ef4815915603e2cee8806d49869070d969f84d43194d4f33e40"} Dec 03 18:01:01 crc kubenswrapper[4687]: I1203 18:01:01.007687 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerStarted","Data":"ca9fae0fa75bb007b705c125c9b69e6854c92f603a51767ddc4783554446b264"} Dec 03 18:01:02 crc kubenswrapper[4687]: I1203 18:01:02.019781 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413081-r8x9h" 
event={"ID":"b058710c-db65-4f53-b9b7-2e279672355a","Type":"ContainerStarted","Data":"8b3d9a9f8ec05392b09ee22853f9211b75f07b40097798f6ba75982771d6fab3"} Dec 03 18:01:02 crc kubenswrapper[4687]: I1203 18:01:02.041699 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29413081-r8x9h" podStartSLOduration=2.041671744 podStartE2EDuration="2.041671744s" podCreationTimestamp="2025-12-03 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:01:02.034947942 +0000 UTC m=+1294.925643415" watchObservedRunningTime="2025-12-03 18:01:02.041671744 +0000 UTC m=+1294.932367187" Dec 03 18:01:03 crc kubenswrapper[4687]: I1203 18:01:03.081015 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="ceilometer-central-agent" containerID="cri-o://886aee6ab03fc307da8864eba8b01b46dd6030d060501042b506ebab1837d5ea" gracePeriod=30 Dec 03 18:01:03 crc kubenswrapper[4687]: I1203 18:01:03.081469 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="ceilometer-notification-agent" containerID="cri-o://ca9fae0fa75bb007b705c125c9b69e6854c92f603a51767ddc4783554446b264" gracePeriod=30 Dec 03 18:01:03 crc kubenswrapper[4687]: I1203 18:01:03.081409 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="proxy-httpd" containerID="cri-o://8007580a35839ec827db1fa2ef5a318eca63581ca4205b1d4e2c75d5b3a91650" gracePeriod=30 Dec 03 18:01:03 crc kubenswrapper[4687]: I1203 18:01:03.081293 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerStarted","Data":"8007580a35839ec827db1fa2ef5a318eca63581ca4205b1d4e2c75d5b3a91650"} Dec 03 18:01:03 crc kubenswrapper[4687]: I1203 18:01:03.081794 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 18:01:03 crc kubenswrapper[4687]: I1203 18:01:03.081741 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="sg-core" containerID="cri-o://9e99c91874d27ef4815915603e2cee8806d49869070d969f84d43194d4f33e40" gracePeriod=30 Dec 03 18:01:03 crc kubenswrapper[4687]: I1203 18:01:03.117343 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.3083311850000001 podStartE2EDuration="5.117318677s" podCreationTimestamp="2025-12-03 18:00:58 +0000 UTC" firstStartedPulling="2025-12-03 18:00:58.87618339 +0000 UTC m=+1291.766878823" lastFinishedPulling="2025-12-03 18:01:02.685170872 +0000 UTC m=+1295.575866315" observedRunningTime="2025-12-03 18:01:03.108444958 +0000 UTC m=+1295.999140391" watchObservedRunningTime="2025-12-03 18:01:03.117318677 +0000 UTC m=+1296.008014110" Dec 03 18:01:04 crc kubenswrapper[4687]: I1203 18:01:04.093529 4687 generic.go:334] "Generic (PLEG): container finished" podID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerID="9e99c91874d27ef4815915603e2cee8806d49869070d969f84d43194d4f33e40" exitCode=2 Dec 03 18:01:04 crc kubenswrapper[4687]: I1203 18:01:04.093875 4687 generic.go:334] "Generic (PLEG): container finished" podID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerID="ca9fae0fa75bb007b705c125c9b69e6854c92f603a51767ddc4783554446b264" exitCode=0 Dec 03 18:01:04 crc kubenswrapper[4687]: I1203 18:01:04.093592 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerDied","Data":"9e99c91874d27ef4815915603e2cee8806d49869070d969f84d43194d4f33e40"} Dec 03 18:01:04 crc kubenswrapper[4687]: I1203 18:01:04.093970 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerDied","Data":"ca9fae0fa75bb007b705c125c9b69e6854c92f603a51767ddc4783554446b264"} Dec 03 18:01:04 crc kubenswrapper[4687]: I1203 18:01:04.095797 4687 generic.go:334] "Generic (PLEG): container finished" podID="b058710c-db65-4f53-b9b7-2e279672355a" containerID="8b3d9a9f8ec05392b09ee22853f9211b75f07b40097798f6ba75982771d6fab3" exitCode=0 Dec 03 18:01:04 crc kubenswrapper[4687]: I1203 18:01:04.095875 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413081-r8x9h" event={"ID":"b058710c-db65-4f53-b9b7-2e279672355a","Type":"ContainerDied","Data":"8b3d9a9f8ec05392b09ee22853f9211b75f07b40097798f6ba75982771d6fab3"} Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.415445 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.511614 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-fernet-keys\") pod \"b058710c-db65-4f53-b9b7-2e279672355a\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.511858 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-combined-ca-bundle\") pod \"b058710c-db65-4f53-b9b7-2e279672355a\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.511944 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjbxg\" (UniqueName: \"kubernetes.io/projected/b058710c-db65-4f53-b9b7-2e279672355a-kube-api-access-vjbxg\") pod \"b058710c-db65-4f53-b9b7-2e279672355a\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.512003 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-config-data\") pod \"b058710c-db65-4f53-b9b7-2e279672355a\" (UID: \"b058710c-db65-4f53-b9b7-2e279672355a\") " Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.517630 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b058710c-db65-4f53-b9b7-2e279672355a-kube-api-access-vjbxg" (OuterVolumeSpecName: "kube-api-access-vjbxg") pod "b058710c-db65-4f53-b9b7-2e279672355a" (UID: "b058710c-db65-4f53-b9b7-2e279672355a"). InnerVolumeSpecName "kube-api-access-vjbxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.522855 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b058710c-db65-4f53-b9b7-2e279672355a" (UID: "b058710c-db65-4f53-b9b7-2e279672355a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.544377 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b058710c-db65-4f53-b9b7-2e279672355a" (UID: "b058710c-db65-4f53-b9b7-2e279672355a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.566346 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-config-data" (OuterVolumeSpecName: "config-data") pod "b058710c-db65-4f53-b9b7-2e279672355a" (UID: "b058710c-db65-4f53-b9b7-2e279672355a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.614657 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.614687 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjbxg\" (UniqueName: \"kubernetes.io/projected/b058710c-db65-4f53-b9b7-2e279672355a-kube-api-access-vjbxg\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.614698 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:05 crc kubenswrapper[4687]: I1203 18:01:05.614708 4687 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b058710c-db65-4f53-b9b7-2e279672355a-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:06 crc kubenswrapper[4687]: I1203 18:01:06.116203 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29413081-r8x9h" event={"ID":"b058710c-db65-4f53-b9b7-2e279672355a","Type":"ContainerDied","Data":"b18464cd077af42bc2b95d59a270942a4884378b6db17b7faff060ccf2b48212"} Dec 03 18:01:06 crc kubenswrapper[4687]: I1203 18:01:06.116250 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b18464cd077af42bc2b95d59a270942a4884378b6db17b7faff060ccf2b48212" Dec 03 18:01:06 crc kubenswrapper[4687]: I1203 18:01:06.116737 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29413081-r8x9h" Dec 03 18:01:08 crc kubenswrapper[4687]: I1203 18:01:08.134384 4687 generic.go:334] "Generic (PLEG): container finished" podID="e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e" containerID="1b6988722fd79c441f7cdfbf33e17fcfe74a4f5f1351124f554fd4be6476708b" exitCode=0 Dec 03 18:01:08 crc kubenswrapper[4687]: I1203 18:01:08.134484 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4npbh" event={"ID":"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e","Type":"ContainerDied","Data":"1b6988722fd79c441f7cdfbf33e17fcfe74a4f5f1351124f554fd4be6476708b"} Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.548064 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.591012 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-scripts\") pod \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.591115 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvcms\" (UniqueName: \"kubernetes.io/projected/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-kube-api-access-tvcms\") pod \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.591225 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-combined-ca-bundle\") pod \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.591268 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-config-data\") pod \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\" (UID: \"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e\") " Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.597822 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-kube-api-access-tvcms" (OuterVolumeSpecName: "kube-api-access-tvcms") pod "e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e" (UID: "e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e"). InnerVolumeSpecName "kube-api-access-tvcms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.602943 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-scripts" (OuterVolumeSpecName: "scripts") pod "e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e" (UID: "e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.626509 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e" (UID: "e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.626957 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-config-data" (OuterVolumeSpecName: "config-data") pod "e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e" (UID: "e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.693613 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.693655 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.693673 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:09 crc kubenswrapper[4687]: I1203 18:01:09.693685 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvcms\" (UniqueName: \"kubernetes.io/projected/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e-kube-api-access-tvcms\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.154060 4687 generic.go:334] "Generic (PLEG): container finished" podID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerID="886aee6ab03fc307da8864eba8b01b46dd6030d060501042b506ebab1837d5ea" exitCode=0 Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.154158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerDied","Data":"886aee6ab03fc307da8864eba8b01b46dd6030d060501042b506ebab1837d5ea"} Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.155747 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4npbh" event={"ID":"e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e","Type":"ContainerDied","Data":"f63b17351902caf952388188908a9c75d54c440064b8235ebf7113a5351ea60d"} Dec 03 
18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.155781 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63b17351902caf952388188908a9c75d54c440064b8235ebf7113a5351ea60d" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.155805 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4npbh" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.253951 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 18:01:10 crc kubenswrapper[4687]: E1203 18:01:10.254714 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b058710c-db65-4f53-b9b7-2e279672355a" containerName="keystone-cron" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.254832 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b058710c-db65-4f53-b9b7-2e279672355a" containerName="keystone-cron" Dec 03 18:01:10 crc kubenswrapper[4687]: E1203 18:01:10.254932 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e" containerName="nova-cell0-conductor-db-sync" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.255015 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e" containerName="nova-cell0-conductor-db-sync" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.255367 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e" containerName="nova-cell0-conductor-db-sync" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.255488 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b058710c-db65-4f53-b9b7-2e279672355a" containerName="keystone-cron" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.256562 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.260566 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.260723 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-75tc5" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.266344 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.304550 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4907b0-15af-400a-8430-ee3890e80010-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"be4907b0-15af-400a-8430-ee3890e80010\") " pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.304630 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be4907b0-15af-400a-8430-ee3890e80010-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"be4907b0-15af-400a-8430-ee3890e80010\") " pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.304767 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45frl\" (UniqueName: \"kubernetes.io/projected/be4907b0-15af-400a-8430-ee3890e80010-kube-api-access-45frl\") pod \"nova-cell0-conductor-0\" (UID: \"be4907b0-15af-400a-8430-ee3890e80010\") " pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.407351 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45frl\" (UniqueName: 
\"kubernetes.io/projected/be4907b0-15af-400a-8430-ee3890e80010-kube-api-access-45frl\") pod \"nova-cell0-conductor-0\" (UID: \"be4907b0-15af-400a-8430-ee3890e80010\") " pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.407466 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4907b0-15af-400a-8430-ee3890e80010-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"be4907b0-15af-400a-8430-ee3890e80010\") " pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.407540 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be4907b0-15af-400a-8430-ee3890e80010-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"be4907b0-15af-400a-8430-ee3890e80010\") " pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.413465 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be4907b0-15af-400a-8430-ee3890e80010-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"be4907b0-15af-400a-8430-ee3890e80010\") " pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.427733 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be4907b0-15af-400a-8430-ee3890e80010-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"be4907b0-15af-400a-8430-ee3890e80010\") " pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.450150 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45frl\" (UniqueName: \"kubernetes.io/projected/be4907b0-15af-400a-8430-ee3890e80010-kube-api-access-45frl\") pod \"nova-cell0-conductor-0\" (UID: 
\"be4907b0-15af-400a-8430-ee3890e80010\") " pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:10 crc kubenswrapper[4687]: I1203 18:01:10.574108 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:11 crc kubenswrapper[4687]: I1203 18:01:11.050025 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 03 18:01:11 crc kubenswrapper[4687]: I1203 18:01:11.168744 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"be4907b0-15af-400a-8430-ee3890e80010","Type":"ContainerStarted","Data":"cd4ab592be08d9f70918e9d3f60d6cf16b58027f156519265a61b963cdcfa6c9"} Dec 03 18:01:12 crc kubenswrapper[4687]: I1203 18:01:12.183429 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"be4907b0-15af-400a-8430-ee3890e80010","Type":"ContainerStarted","Data":"b1857a0b0f5dca78dd0c4eebedf2f096d63d564da3210da176d50975b85e13a6"} Dec 03 18:01:12 crc kubenswrapper[4687]: I1203 18:01:12.185218 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:12 crc kubenswrapper[4687]: I1203 18:01:12.216067 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.216046421 podStartE2EDuration="2.216046421s" podCreationTimestamp="2025-12-03 18:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:01:12.209477124 +0000 UTC m=+1305.100172577" watchObservedRunningTime="2025-12-03 18:01:12.216046421 +0000 UTC m=+1305.106741864" Dec 03 18:01:14 crc kubenswrapper[4687]: I1203 18:01:14.111353 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:01:14 crc kubenswrapper[4687]: I1203 18:01:14.111871 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:01:14 crc kubenswrapper[4687]: I1203 18:01:14.111929 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:01:14 crc kubenswrapper[4687]: I1203 18:01:14.112763 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db902a5bffdbf33c8da58cdee4ed48423a21c1c42eeecaaf4efe21343a963605"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:01:14 crc kubenswrapper[4687]: I1203 18:01:14.112815 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://db902a5bffdbf33c8da58cdee4ed48423a21c1c42eeecaaf4efe21343a963605" gracePeriod=600 Dec 03 18:01:15 crc kubenswrapper[4687]: I1203 18:01:15.217347 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="db902a5bffdbf33c8da58cdee4ed48423a21c1c42eeecaaf4efe21343a963605" exitCode=0 Dec 03 18:01:15 crc kubenswrapper[4687]: I1203 18:01:15.217459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" 
event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"db902a5bffdbf33c8da58cdee4ed48423a21c1c42eeecaaf4efe21343a963605"} Dec 03 18:01:15 crc kubenswrapper[4687]: I1203 18:01:15.217902 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9"} Dec 03 18:01:15 crc kubenswrapper[4687]: I1203 18:01:15.217933 4687 scope.go:117] "RemoveContainer" containerID="5b5046e7c2fc69da47de778c08a447a041ab0f6ce5bedb54a043d37f682e5a7a" Dec 03 18:01:20 crc kubenswrapper[4687]: I1203 18:01:20.602507 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.063262 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rz5q5"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.064940 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.067790 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.068435 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.077393 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rz5q5"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.117341 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwhch\" (UniqueName: \"kubernetes.io/projected/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-kube-api-access-zwhch\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.117504 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.117555 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-scripts\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.117606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-config-data\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.219544 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.219590 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-scripts\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.219613 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-config-data\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.219673 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwhch\" (UniqueName: \"kubernetes.io/projected/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-kube-api-access-zwhch\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.229634 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-scripts\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.230260 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-config-data\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.242658 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.261370 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwhch\" (UniqueName: \"kubernetes.io/projected/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-kube-api-access-zwhch\") pod \"nova-cell0-cell-mapping-rz5q5\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.377369 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.379281 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.393569 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.405817 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.426805 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-config-data\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.426869 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e57b639-c060-44f7-88f5-810fe6779351-logs\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.426942 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.426976 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d9sf\" (UniqueName: \"kubernetes.io/projected/6e57b639-c060-44f7-88f5-810fe6779351-kube-api-access-5d9sf\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.454468 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.494188 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.496015 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.513790 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.514687 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.528371 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx4vz\" (UniqueName: \"kubernetes.io/projected/c14eda0d-3dae-4172-b846-354ad79b5803-kube-api-access-rx4vz\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.528447 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-config-data\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.528499 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e57b639-c060-44f7-88f5-810fe6779351-logs\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.528583 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.528620 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-config-data\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.528647 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d9sf\" (UniqueName: \"kubernetes.io/projected/6e57b639-c060-44f7-88f5-810fe6779351-kube-api-access-5d9sf\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.528678 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c14eda0d-3dae-4172-b846-354ad79b5803-logs\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.528704 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.533838 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e57b639-c060-44f7-88f5-810fe6779351-logs\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.552739 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.554241 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.556381 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.561779 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.573858 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.574986 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.599396 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.610710 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.611885 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-config-data\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.614173 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d9sf\" (UniqueName: \"kubernetes.io/projected/6e57b639-c060-44f7-88f5-810fe6779351-kube-api-access-5d9sf\") pod \"nova-api-0\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.615930 4687 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.640854 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-config-data\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.641778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c14eda0d-3dae-4172-b846-354ad79b5803-logs\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.641885 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.642017 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whz72\" (UniqueName: \"kubernetes.io/projected/1c41e1bd-000c-4939-ac34-fb3476bf68e5-kube-api-access-whz72\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.642097 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx4vz\" (UniqueName: \"kubernetes.io/projected/c14eda0d-3dae-4172-b846-354ad79b5803-kube-api-access-rx4vz\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc 
kubenswrapper[4687]: I1203 18:01:21.642192 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.642315 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.643151 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c14eda0d-3dae-4172-b846-354ad79b5803-logs\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.647513 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.654130 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-nmjc2"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.657089 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.658923 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-config-data\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.665187 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx4vz\" (UniqueName: \"kubernetes.io/projected/c14eda0d-3dae-4172-b846-354ad79b5803-kube-api-access-rx4vz\") pod \"nova-metadata-0\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.722093 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-nmjc2"] Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.756444 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.757782 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.758255 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-config-data\") pod \"nova-scheduler-0\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.758321 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whz72\" (UniqueName: \"kubernetes.io/projected/1c41e1bd-000c-4939-ac34-fb3476bf68e5-kube-api-access-whz72\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.758422 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.758527 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc 
kubenswrapper[4687]: I1203 18:01:21.758573 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs5fr\" (UniqueName: \"kubernetes.io/projected/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-kube-api-access-hs5fr\") pod \"nova-scheduler-0\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.764683 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.765348 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.786278 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whz72\" (UniqueName: \"kubernetes.io/projected/1c41e1bd-000c-4939-ac34-fb3476bf68e5-kube-api-access-whz72\") pod \"nova-cell1-novncproxy-0\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.859391 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs5fr\" (UniqueName: \"kubernetes.io/projected/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-kube-api-access-hs5fr\") pod \"nova-scheduler-0\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.859433 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8hzr\" (UniqueName: \"kubernetes.io/projected/2126988c-e607-43c2-b47a-c9935c88fa0b-kube-api-access-c8hzr\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.859459 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.859479 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.859530 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.859546 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-config\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.859564 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.859581 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-config-data\") pod \"nova-scheduler-0\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.859601 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.863013 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.863701 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-config-data\") pod \"nova-scheduler-0\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.882092 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs5fr\" (UniqueName: 
\"kubernetes.io/projected/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-kube-api-access-hs5fr\") pod \"nova-scheduler-0\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.949323 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.961488 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8hzr\" (UniqueName: \"kubernetes.io/projected/2126988c-e607-43c2-b47a-c9935c88fa0b-kube-api-access-c8hzr\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.961546 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.961574 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.961639 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-config\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.961673 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.961709 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.962713 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.964915 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.964924 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.964961 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-config\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.965514 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.967349 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.979175 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8hzr\" (UniqueName: \"kubernetes.io/projected/2126988c-e607-43c2-b47a-c9935c88fa0b-kube-api-access-c8hzr\") pod \"dnsmasq-dns-757b4f8459-nmjc2\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:21 crc kubenswrapper[4687]: I1203 18:01:21.988333 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.045394 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.107233 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rz5q5"] Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.287281 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bttct"] Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.289211 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.293544 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.293799 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.310507 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.311448 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bttct"] Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.342843 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rz5q5" event={"ID":"920884a6-a7b0-49c6-abe7-2b9a9f8b9835","Type":"ContainerStarted","Data":"bfe77cb5390c011c6bda3951449ae4b67cf9c97dd70ff46bb82d9cb8834fefc5"} Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.475285 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.485606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-config-data\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.485688 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " 
pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.485718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6wvf\" (UniqueName: \"kubernetes.io/projected/512fe776-5298-42ac-b760-682e3b0d99e5-kube-api-access-f6wvf\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.485828 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-scripts\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.590220 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-scripts\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.590470 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-config-data\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.590548 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: 
\"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.590575 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6wvf\" (UniqueName: \"kubernetes.io/projected/512fe776-5298-42ac-b760-682e3b0d99e5-kube-api-access-f6wvf\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.597716 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-scripts\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.604235 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.604484 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-config-data\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.616415 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6wvf\" (UniqueName: \"kubernetes.io/projected/512fe776-5298-42ac-b760-682e3b0d99e5-kube-api-access-f6wvf\") pod \"nova-cell1-conductor-db-sync-bttct\" (UID: 
\"512fe776-5298-42ac-b760-682e3b0d99e5\") " pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.666072 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.721513 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:22 crc kubenswrapper[4687]: W1203 18:01:22.762255 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c41e1bd_000c_4939_ac34_fb3476bf68e5.slice/crio-4369b702f553b4223c8bca97b0a98fe03ee85ffe0316b1859680f769f85bf165 WatchSource:0}: Error finding container 4369b702f553b4223c8bca97b0a98fe03ee85ffe0316b1859680f769f85bf165: Status 404 returned error can't find the container with id 4369b702f553b4223c8bca97b0a98fe03ee85ffe0316b1859680f769f85bf165 Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.764235 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 18:01:22 crc kubenswrapper[4687]: I1203 18:01:22.778960 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-nmjc2"] Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.197856 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bttct"] Dec 03 18:01:23 crc kubenswrapper[4687]: W1203 18:01:23.199768 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod512fe776_5298_42ac_b760_682e3b0d99e5.slice/crio-9f7f4ada8dfa13b4123017b2a9256540ac8f2045f57e460627c97d241974e613 WatchSource:0}: Error finding container 9f7f4ada8dfa13b4123017b2a9256540ac8f2045f57e460627c97d241974e613: Status 404 returned error can't find the container with id 
9f7f4ada8dfa13b4123017b2a9256540ac8f2045f57e460627c97d241974e613 Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.363213 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e57b639-c060-44f7-88f5-810fe6779351","Type":"ContainerStarted","Data":"eb7110f33e005f64777972811b3c4d150b7652b3ec9f8684d18f4d57cf1ca168"} Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.369159 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c14eda0d-3dae-4172-b846-354ad79b5803","Type":"ContainerStarted","Data":"bebb2fb058a704e490a1f7f2bea3c0f6503eef937b7691a48b0063a9f936a911"} Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.372819 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bttct" event={"ID":"512fe776-5298-42ac-b760-682e3b0d99e5","Type":"ContainerStarted","Data":"9f7f4ada8dfa13b4123017b2a9256540ac8f2045f57e460627c97d241974e613"} Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.397418 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd8d8971-3fb3-44ab-bf29-8a0596810a1a","Type":"ContainerStarted","Data":"284e6e37a29be997d01614c01570ddc355df611a32ca5e49935e017579e6487f"} Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.398673 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rz5q5" event={"ID":"920884a6-a7b0-49c6-abe7-2b9a9f8b9835","Type":"ContainerStarted","Data":"3d914089314d14fa717b485a4a023ea1bc178b893de4981c8b987644fd091245"} Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.402076 4687 generic.go:334] "Generic (PLEG): container finished" podID="2126988c-e607-43c2-b47a-c9935c88fa0b" containerID="2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3" exitCode=0 Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.402164 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" event={"ID":"2126988c-e607-43c2-b47a-c9935c88fa0b","Type":"ContainerDied","Data":"2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3"} Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.402191 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" event={"ID":"2126988c-e607-43c2-b47a-c9935c88fa0b","Type":"ContainerStarted","Data":"527c0467988ac6399394b80c87e339f9d122b446be4b3a0f18da352608fb47ff"} Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.405000 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1c41e1bd-000c-4939-ac34-fb3476bf68e5","Type":"ContainerStarted","Data":"4369b702f553b4223c8bca97b0a98fe03ee85ffe0316b1859680f769f85bf165"} Dec 03 18:01:23 crc kubenswrapper[4687]: I1203 18:01:23.460245 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rz5q5" podStartSLOduration=2.46021919 podStartE2EDuration="2.46021919s" podCreationTimestamp="2025-12-03 18:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:01:23.427414776 +0000 UTC m=+1316.318110219" watchObservedRunningTime="2025-12-03 18:01:23.46021919 +0000 UTC m=+1316.350914623" Dec 03 18:01:24 crc kubenswrapper[4687]: I1203 18:01:24.419824 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bttct" event={"ID":"512fe776-5298-42ac-b760-682e3b0d99e5","Type":"ContainerStarted","Data":"26c75ed11cfbece24255263cff21a228a477e7b19947f19534c75b814e778778"} Dec 03 18:01:24 crc kubenswrapper[4687]: I1203 18:01:24.427030 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" 
event={"ID":"2126988c-e607-43c2-b47a-c9935c88fa0b","Type":"ContainerStarted","Data":"5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500"} Dec 03 18:01:24 crc kubenswrapper[4687]: I1203 18:01:24.427068 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:24 crc kubenswrapper[4687]: I1203 18:01:24.440572 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bttct" podStartSLOduration=2.440555903 podStartE2EDuration="2.440555903s" podCreationTimestamp="2025-12-03 18:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:01:24.44042894 +0000 UTC m=+1317.331124373" watchObservedRunningTime="2025-12-03 18:01:24.440555903 +0000 UTC m=+1317.331251326" Dec 03 18:01:24 crc kubenswrapper[4687]: I1203 18:01:24.474372 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" podStartSLOduration=3.474353844 podStartE2EDuration="3.474353844s" podCreationTimestamp="2025-12-03 18:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:01:24.466078152 +0000 UTC m=+1317.356773595" watchObservedRunningTime="2025-12-03 18:01:24.474353844 +0000 UTC m=+1317.365049277" Dec 03 18:01:25 crc kubenswrapper[4687]: I1203 18:01:25.081559 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 18:01:25 crc kubenswrapper[4687]: I1203 18:01:25.090367 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.455836 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"fd8d8971-3fb3-44ab-bf29-8a0596810a1a","Type":"ContainerStarted","Data":"6b07f7c2ae6fd5bb8c702a6edc4c2f22add5b0cb64f8ed429e420c5bc7fe9afe"} Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.458648 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1c41e1bd-000c-4939-ac34-fb3476bf68e5","Type":"ContainerStarted","Data":"56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900"} Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.458968 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1c41e1bd-000c-4939-ac34-fb3476bf68e5" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900" gracePeriod=30 Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.468639 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e57b639-c060-44f7-88f5-810fe6779351","Type":"ContainerStarted","Data":"a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8"} Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.468754 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e57b639-c060-44f7-88f5-810fe6779351","Type":"ContainerStarted","Data":"78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba"} Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.470920 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c14eda0d-3dae-4172-b846-354ad79b5803","Type":"ContainerStarted","Data":"8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c"} Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.470961 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"c14eda0d-3dae-4172-b846-354ad79b5803","Type":"ContainerStarted","Data":"cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380"} Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.471063 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c14eda0d-3dae-4172-b846-354ad79b5803" containerName="nova-metadata-log" containerID="cri-o://cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380" gracePeriod=30 Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.471108 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c14eda0d-3dae-4172-b846-354ad79b5803" containerName="nova-metadata-metadata" containerID="cri-o://8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c" gracePeriod=30 Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.535607 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.769016656 podStartE2EDuration="6.53538363s" podCreationTimestamp="2025-12-03 18:01:21 +0000 UTC" firstStartedPulling="2025-12-03 18:01:22.365294697 +0000 UTC m=+1315.255990130" lastFinishedPulling="2025-12-03 18:01:26.131661671 +0000 UTC m=+1319.022357104" observedRunningTime="2025-12-03 18:01:27.527529038 +0000 UTC m=+1320.418224471" watchObservedRunningTime="2025-12-03 18:01:27.53538363 +0000 UTC m=+1320.426079063" Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.557449 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.187706396 podStartE2EDuration="6.557383203s" podCreationTimestamp="2025-12-03 18:01:21 +0000 UTC" firstStartedPulling="2025-12-03 18:01:22.76518492 +0000 UTC m=+1315.655880353" lastFinishedPulling="2025-12-03 18:01:26.134861727 +0000 UTC m=+1319.025557160" observedRunningTime="2025-12-03 18:01:27.545847112 +0000 UTC 
m=+1320.436542545" watchObservedRunningTime="2025-12-03 18:01:27.557383203 +0000 UTC m=+1320.448078636" Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.580091 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.941128488 podStartE2EDuration="6.580065035s" podCreationTimestamp="2025-12-03 18:01:21 +0000 UTC" firstStartedPulling="2025-12-03 18:01:22.492928609 +0000 UTC m=+1315.383624042" lastFinishedPulling="2025-12-03 18:01:26.131865156 +0000 UTC m=+1319.022560589" observedRunningTime="2025-12-03 18:01:27.566196381 +0000 UTC m=+1320.456891814" watchObservedRunningTime="2025-12-03 18:01:27.580065035 +0000 UTC m=+1320.470760468" Dec 03 18:01:27 crc kubenswrapper[4687]: I1203 18:01:27.582884 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.124997986 podStartE2EDuration="6.582864091s" podCreationTimestamp="2025-12-03 18:01:21 +0000 UTC" firstStartedPulling="2025-12-03 18:01:22.673995371 +0000 UTC m=+1315.564690804" lastFinishedPulling="2025-12-03 18:01:26.131861476 +0000 UTC m=+1319.022556909" observedRunningTime="2025-12-03 18:01:27.582101269 +0000 UTC m=+1320.472796702" watchObservedRunningTime="2025-12-03 18:01:27.582864091 +0000 UTC m=+1320.473559524" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.115439 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.160110 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-config-data\") pod \"c14eda0d-3dae-4172-b846-354ad79b5803\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.160167 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c14eda0d-3dae-4172-b846-354ad79b5803-logs\") pod \"c14eda0d-3dae-4172-b846-354ad79b5803\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.160231 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx4vz\" (UniqueName: \"kubernetes.io/projected/c14eda0d-3dae-4172-b846-354ad79b5803-kube-api-access-rx4vz\") pod \"c14eda0d-3dae-4172-b846-354ad79b5803\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.160262 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-combined-ca-bundle\") pod \"c14eda0d-3dae-4172-b846-354ad79b5803\" (UID: \"c14eda0d-3dae-4172-b846-354ad79b5803\") " Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.160807 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c14eda0d-3dae-4172-b846-354ad79b5803-logs" (OuterVolumeSpecName: "logs") pod "c14eda0d-3dae-4172-b846-354ad79b5803" (UID: "c14eda0d-3dae-4172-b846-354ad79b5803"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.161394 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c14eda0d-3dae-4172-b846-354ad79b5803-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.169384 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14eda0d-3dae-4172-b846-354ad79b5803-kube-api-access-rx4vz" (OuterVolumeSpecName: "kube-api-access-rx4vz") pod "c14eda0d-3dae-4172-b846-354ad79b5803" (UID: "c14eda0d-3dae-4172-b846-354ad79b5803"). InnerVolumeSpecName "kube-api-access-rx4vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.194289 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-config-data" (OuterVolumeSpecName: "config-data") pod "c14eda0d-3dae-4172-b846-354ad79b5803" (UID: "c14eda0d-3dae-4172-b846-354ad79b5803"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.197943 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c14eda0d-3dae-4172-b846-354ad79b5803" (UID: "c14eda0d-3dae-4172-b846-354ad79b5803"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.262948 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.263006 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx4vz\" (UniqueName: \"kubernetes.io/projected/c14eda0d-3dae-4172-b846-354ad79b5803-kube-api-access-rx4vz\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.263022 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14eda0d-3dae-4172-b846-354ad79b5803-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.381174 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.481379 4687 generic.go:334] "Generic (PLEG): container finished" podID="c14eda0d-3dae-4172-b846-354ad79b5803" containerID="8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c" exitCode=0 Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.481408 4687 generic.go:334] "Generic (PLEG): container finished" podID="c14eda0d-3dae-4172-b846-354ad79b5803" containerID="cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380" exitCode=143 Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.481477 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.481515 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c14eda0d-3dae-4172-b846-354ad79b5803","Type":"ContainerDied","Data":"8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c"} Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.481575 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c14eda0d-3dae-4172-b846-354ad79b5803","Type":"ContainerDied","Data":"cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380"} Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.481589 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c14eda0d-3dae-4172-b846-354ad79b5803","Type":"ContainerDied","Data":"bebb2fb058a704e490a1f7f2bea3c0f6503eef937b7691a48b0063a9f936a911"} Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.481606 4687 scope.go:117] "RemoveContainer" containerID="8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.524195 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.524493 4687 scope.go:117] "RemoveContainer" containerID="cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.531421 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.554246 4687 scope.go:117] "RemoveContainer" containerID="8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c" Dec 03 18:01:28 crc kubenswrapper[4687]: E1203 18:01:28.558086 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c\": container with ID starting with 8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c not found: ID does not exist" containerID="8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.558153 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c"} err="failed to get container status \"8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c\": rpc error: code = NotFound desc = could not find container \"8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c\": container with ID starting with 8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c not found: ID does not exist" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.558178 4687 scope.go:117] "RemoveContainer" containerID="cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380" Dec 03 18:01:28 crc kubenswrapper[4687]: E1203 18:01:28.558570 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380\": container with ID starting with cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380 not found: ID does not exist" containerID="cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.558619 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380"} err="failed to get container status \"cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380\": rpc error: code = NotFound desc = could not find container \"cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380\": container with ID 
starting with cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380 not found: ID does not exist" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.558650 4687 scope.go:117] "RemoveContainer" containerID="8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.558910 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c"} err="failed to get container status \"8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c\": rpc error: code = NotFound desc = could not find container \"8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c\": container with ID starting with 8a84b5eac838fd9a6162f928629c90a83231aba30031a4e2a059aca7f7f4411c not found: ID does not exist" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.558963 4687 scope.go:117] "RemoveContainer" containerID="cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.559375 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380"} err="failed to get container status \"cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380\": rpc error: code = NotFound desc = could not find container \"cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380\": container with ID starting with cdcbfcdc98116bfabec8e02298e3db13b29485e4821bbf81fdb0c11fdcc9a380 not found: ID does not exist" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.560426 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:28 crc kubenswrapper[4687]: E1203 18:01:28.560920 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14eda0d-3dae-4172-b846-354ad79b5803" 
containerName="nova-metadata-log" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.560945 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14eda0d-3dae-4172-b846-354ad79b5803" containerName="nova-metadata-log" Dec 03 18:01:28 crc kubenswrapper[4687]: E1203 18:01:28.560972 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14eda0d-3dae-4172-b846-354ad79b5803" containerName="nova-metadata-metadata" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.560980 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14eda0d-3dae-4172-b846-354ad79b5803" containerName="nova-metadata-metadata" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.561435 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14eda0d-3dae-4172-b846-354ad79b5803" containerName="nova-metadata-log" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.561466 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14eda0d-3dae-4172-b846-354ad79b5803" containerName="nova-metadata-metadata" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.564631 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.566255 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.567140 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.602204 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.670294 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-logs\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.670372 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-config-data\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.670402 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.670555 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzcvm\" (UniqueName: \"kubernetes.io/projected/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-kube-api-access-nzcvm\") pod \"nova-metadata-0\" 
(UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.670631 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.772507 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzcvm\" (UniqueName: \"kubernetes.io/projected/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-kube-api-access-nzcvm\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.772594 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.772699 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-logs\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.772748 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-config-data\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.772773 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.773419 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-logs\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.777150 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-config-data\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.777734 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.779018 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.790976 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzcvm\" (UniqueName: \"kubernetes.io/projected/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-kube-api-access-nzcvm\") pod 
\"nova-metadata-0\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " pod="openstack/nova-metadata-0" Dec 03 18:01:28 crc kubenswrapper[4687]: I1203 18:01:28.899517 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:29 crc kubenswrapper[4687]: I1203 18:01:29.338837 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:29 crc kubenswrapper[4687]: I1203 18:01:29.419757 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14eda0d-3dae-4172-b846-354ad79b5803" path="/var/lib/kubelet/pods/c14eda0d-3dae-4172-b846-354ad79b5803/volumes" Dec 03 18:01:29 crc kubenswrapper[4687]: I1203 18:01:29.493166 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97da68f7-fc98-49c1-b4e0-cb3f52be0a51","Type":"ContainerStarted","Data":"e174718eff3b66aafa8538f56d87b1b6bec0b08d8e137576981a9347ba3aebe4"} Dec 03 18:01:30 crc kubenswrapper[4687]: I1203 18:01:30.512335 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97da68f7-fc98-49c1-b4e0-cb3f52be0a51","Type":"ContainerStarted","Data":"b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519"} Dec 03 18:01:30 crc kubenswrapper[4687]: I1203 18:01:30.512677 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97da68f7-fc98-49c1-b4e0-cb3f52be0a51","Type":"ContainerStarted","Data":"2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598"} Dec 03 18:01:30 crc kubenswrapper[4687]: I1203 18:01:30.533479 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.533464238 podStartE2EDuration="2.533464238s" podCreationTimestamp="2025-12-03 18:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 18:01:30.531035842 +0000 UTC m=+1323.421731275" watchObservedRunningTime="2025-12-03 18:01:30.533464238 +0000 UTC m=+1323.424159671" Dec 03 18:01:31 crc kubenswrapper[4687]: I1203 18:01:31.521684 4687 generic.go:334] "Generic (PLEG): container finished" podID="920884a6-a7b0-49c6-abe7-2b9a9f8b9835" containerID="3d914089314d14fa717b485a4a023ea1bc178b893de4981c8b987644fd091245" exitCode=0 Dec 03 18:01:31 crc kubenswrapper[4687]: I1203 18:01:31.521753 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rz5q5" event={"ID":"920884a6-a7b0-49c6-abe7-2b9a9f8b9835","Type":"ContainerDied","Data":"3d914089314d14fa717b485a4a023ea1bc178b893de4981c8b987644fd091245"} Dec 03 18:01:31 crc kubenswrapper[4687]: I1203 18:01:31.523370 4687 generic.go:334] "Generic (PLEG): container finished" podID="512fe776-5298-42ac-b760-682e3b0d99e5" containerID="26c75ed11cfbece24255263cff21a228a477e7b19947f19534c75b814e778778" exitCode=0 Dec 03 18:01:31 crc kubenswrapper[4687]: I1203 18:01:31.523456 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bttct" event={"ID":"512fe776-5298-42ac-b760-682e3b0d99e5","Type":"ContainerDied","Data":"26c75ed11cfbece24255263cff21a228a477e7b19947f19534c75b814e778778"} Dec 03 18:01:31 crc kubenswrapper[4687]: I1203 18:01:31.758262 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 18:01:31 crc kubenswrapper[4687]: I1203 18:01:31.758321 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 18:01:31 crc kubenswrapper[4687]: I1203 18:01:31.968112 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 18:01:31 crc kubenswrapper[4687]: I1203 18:01:31.968180 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 18:01:31 crc 
kubenswrapper[4687]: I1203 18:01:31.989846 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.004766 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.046300 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.196100 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2d9p4"] Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.196470 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" podUID="9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" containerName="dnsmasq-dns" containerID="cri-o://1769f2e71fbf9f6832492d5c5072ca64b8903c6fc541c8f086561035853d1350" gracePeriod=10 Dec 03 18:01:32 crc kubenswrapper[4687]: E1203 18:01:32.334266 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a3b5e3d_5ec7_431b_8c14_fc79b496f9b0.slice/crio-1769f2e71fbf9f6832492d5c5072ca64b8903c6fc541c8f086561035853d1350.scope\": RecentStats: unable to find data in memory cache]" Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.542227 4687 generic.go:334] "Generic (PLEG): container finished" podID="9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" containerID="1769f2e71fbf9f6832492d5c5072ca64b8903c6fc541c8f086561035853d1350" exitCode=0 Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.542385 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" 
event={"ID":"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0","Type":"ContainerDied","Data":"1769f2e71fbf9f6832492d5c5072ca64b8903c6fc541c8f086561035853d1350"} Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.579560 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.787961 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.844392 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.844497 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.966327 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-svc\") pod \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.966645 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5996\" (UniqueName: \"kubernetes.io/projected/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-kube-api-access-d5996\") pod \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 
18:01:32.966704 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-config\") pod \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.966733 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-swift-storage-0\") pod \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.966803 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-sb\") pod \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.966857 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-nb\") pod \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\" (UID: \"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0\") " Dec 03 18:01:32 crc kubenswrapper[4687]: I1203 18:01:32.971822 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-kube-api-access-d5996" (OuterVolumeSpecName: "kube-api-access-d5996") pod "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" (UID: "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0"). InnerVolumeSpecName "kube-api-access-d5996". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.024898 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" (UID: "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.034476 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" (UID: "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.046393 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" (UID: "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.057678 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-config" (OuterVolumeSpecName: "config") pod "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" (UID: "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.066892 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.070796 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.070822 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.070831 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5996\" (UniqueName: \"kubernetes.io/projected/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-kube-api-access-d5996\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.070841 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-config\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.070851 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.071015 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" (UID: "9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.075542 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.172710 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwhch\" (UniqueName: \"kubernetes.io/projected/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-kube-api-access-zwhch\") pod \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.172766 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-scripts\") pod \"512fe776-5298-42ac-b760-682e3b0d99e5\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.172841 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-config-data\") pod \"512fe776-5298-42ac-b760-682e3b0d99e5\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.172920 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6wvf\" (UniqueName: \"kubernetes.io/projected/512fe776-5298-42ac-b760-682e3b0d99e5-kube-api-access-f6wvf\") pod \"512fe776-5298-42ac-b760-682e3b0d99e5\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.173045 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-config-data\") pod \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.173075 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-scripts\") pod \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.173136 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-combined-ca-bundle\") pod \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\" (UID: \"920884a6-a7b0-49c6-abe7-2b9a9f8b9835\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.173174 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-combined-ca-bundle\") pod \"512fe776-5298-42ac-b760-682e3b0d99e5\" (UID: \"512fe776-5298-42ac-b760-682e3b0d99e5\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.173663 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.177234 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512fe776-5298-42ac-b760-682e3b0d99e5-kube-api-access-f6wvf" (OuterVolumeSpecName: "kube-api-access-f6wvf") pod "512fe776-5298-42ac-b760-682e3b0d99e5" (UID: "512fe776-5298-42ac-b760-682e3b0d99e5"). InnerVolumeSpecName "kube-api-access-f6wvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.177580 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-kube-api-access-zwhch" (OuterVolumeSpecName: "kube-api-access-zwhch") pod "920884a6-a7b0-49c6-abe7-2b9a9f8b9835" (UID: "920884a6-a7b0-49c6-abe7-2b9a9f8b9835"). 
InnerVolumeSpecName "kube-api-access-zwhch". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.186584 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-scripts" (OuterVolumeSpecName: "scripts") pod "512fe776-5298-42ac-b760-682e3b0d99e5" (UID: "512fe776-5298-42ac-b760-682e3b0d99e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.191328 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-scripts" (OuterVolumeSpecName: "scripts") pod "920884a6-a7b0-49c6-abe7-2b9a9f8b9835" (UID: "920884a6-a7b0-49c6-abe7-2b9a9f8b9835"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.205459 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "512fe776-5298-42ac-b760-682e3b0d99e5" (UID: "512fe776-5298-42ac-b760-682e3b0d99e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.206844 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-config-data" (OuterVolumeSpecName: "config-data") pod "512fe776-5298-42ac-b760-682e3b0d99e5" (UID: "512fe776-5298-42ac-b760-682e3b0d99e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.209970 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-config-data" (OuterVolumeSpecName: "config-data") pod "920884a6-a7b0-49c6-abe7-2b9a9f8b9835" (UID: "920884a6-a7b0-49c6-abe7-2b9a9f8b9835"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.212431 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "920884a6-a7b0-49c6-abe7-2b9a9f8b9835" (UID: "920884a6-a7b0-49c6-abe7-2b9a9f8b9835"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.275087 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.275138 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.275148 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.275158 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 
crc kubenswrapper[4687]: I1203 18:01:33.275168 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwhch\" (UniqueName: \"kubernetes.io/projected/920884a6-a7b0-49c6-abe7-2b9a9f8b9835-kube-api-access-zwhch\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.275177 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.275185 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/512fe776-5298-42ac-b760-682e3b0d99e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.275194 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6wvf\" (UniqueName: \"kubernetes.io/projected/512fe776-5298-42ac-b760-682e3b0d99e5-kube-api-access-f6wvf\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.555200 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rz5q5" event={"ID":"920884a6-a7b0-49c6-abe7-2b9a9f8b9835","Type":"ContainerDied","Data":"bfe77cb5390c011c6bda3951449ae4b67cf9c97dd70ff46bb82d9cb8834fefc5"} Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.555560 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfe77cb5390c011c6bda3951449ae4b67cf9c97dd70ff46bb82d9cb8834fefc5" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.555229 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rz5q5" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.558204 4687 generic.go:334] "Generic (PLEG): container finished" podID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerID="8007580a35839ec827db1fa2ef5a318eca63581ca4205b1d4e2c75d5b3a91650" exitCode=137 Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.558292 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerDied","Data":"8007580a35839ec827db1fa2ef5a318eca63581ca4205b1d4e2c75d5b3a91650"} Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.558460 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ae688d1-9d48-4692-8167-edcbaa1e98b7","Type":"ContainerDied","Data":"ce99d72c367972fa32cada10dd31667f97ec6717371d81b040973809dc71a0d7"} Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.558492 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce99d72c367972fa32cada10dd31667f97ec6717371d81b040973809dc71a0d7" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.559396 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.564436 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" event={"ID":"9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0","Type":"ContainerDied","Data":"a8b9a1214a149a7e9f5bf88340ecdc2841fb782113f2493e96cdc8a37f26c221"} Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.564494 4687 scope.go:117] "RemoveContainer" containerID="1769f2e71fbf9f6832492d5c5072ca64b8903c6fc541c8f086561035853d1350" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.564663 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-2d9p4" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.571437 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bttct" event={"ID":"512fe776-5298-42ac-b760-682e3b0d99e5","Type":"ContainerDied","Data":"9f7f4ada8dfa13b4123017b2a9256540ac8f2045f57e460627c97d241974e613"} Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.571475 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bttct" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.571487 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f7f4ada8dfa13b4123017b2a9256540ac8f2045f57e460627c97d241974e613" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.584958 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqf46\" (UniqueName: \"kubernetes.io/projected/6ae688d1-9d48-4692-8167-edcbaa1e98b7-kube-api-access-bqf46\") pod \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.585039 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-combined-ca-bundle\") pod \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.585065 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-sg-core-conf-yaml\") pod \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.585164 4687 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-run-httpd\") pod \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.585204 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-config-data\") pod \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.585226 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-log-httpd\") pod \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.585254 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-scripts\") pod \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\" (UID: \"6ae688d1-9d48-4692-8167-edcbaa1e98b7\") " Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.591410 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae688d1-9d48-4692-8167-edcbaa1e98b7-kube-api-access-bqf46" (OuterVolumeSpecName: "kube-api-access-bqf46") pod "6ae688d1-9d48-4692-8167-edcbaa1e98b7" (UID: "6ae688d1-9d48-4692-8167-edcbaa1e98b7"). InnerVolumeSpecName "kube-api-access-bqf46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.593432 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-scripts" (OuterVolumeSpecName: "scripts") pod "6ae688d1-9d48-4692-8167-edcbaa1e98b7" (UID: "6ae688d1-9d48-4692-8167-edcbaa1e98b7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.592808 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6ae688d1-9d48-4692-8167-edcbaa1e98b7" (UID: "6ae688d1-9d48-4692-8167-edcbaa1e98b7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.594130 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6ae688d1-9d48-4692-8167-edcbaa1e98b7" (UID: "6ae688d1-9d48-4692-8167-edcbaa1e98b7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.614239 4687 scope.go:117] "RemoveContainer" containerID="ddb580e846e5a6b3e6a9eb53ca4294faa2d71ee0db2ef8ef69064654e304bd83" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.687095 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.687149 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ae688d1-9d48-4692-8167-edcbaa1e98b7-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.687161 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.687174 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqf46\" (UniqueName: \"kubernetes.io/projected/6ae688d1-9d48-4692-8167-edcbaa1e98b7-kube-api-access-bqf46\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.702859 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6ae688d1-9d48-4692-8167-edcbaa1e98b7" (UID: "6ae688d1-9d48-4692-8167-edcbaa1e98b7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.714981 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2d9p4"] Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.724500 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-2d9p4"] Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.733335 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 18:01:33 crc kubenswrapper[4687]: E1203 18:01:33.733973 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="proxy-httpd" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.734063 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="proxy-httpd" Dec 03 18:01:33 crc kubenswrapper[4687]: E1203 18:01:33.734141 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" containerName="dnsmasq-dns" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.734233 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" containerName="dnsmasq-dns" Dec 03 18:01:33 crc kubenswrapper[4687]: E1203 18:01:33.734339 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="sg-core" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.734419 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="sg-core" Dec 03 18:01:33 crc kubenswrapper[4687]: E1203 18:01:33.734486 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="ceilometer-central-agent" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.734539 4687 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="ceilometer-central-agent" Dec 03 18:01:33 crc kubenswrapper[4687]: E1203 18:01:33.734602 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512fe776-5298-42ac-b760-682e3b0d99e5" containerName="nova-cell1-conductor-db-sync" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.734660 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="512fe776-5298-42ac-b760-682e3b0d99e5" containerName="nova-cell1-conductor-db-sync" Dec 03 18:01:33 crc kubenswrapper[4687]: E1203 18:01:33.734723 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="ceilometer-notification-agent" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.734774 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="ceilometer-notification-agent" Dec 03 18:01:33 crc kubenswrapper[4687]: E1203 18:01:33.734835 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920884a6-a7b0-49c6-abe7-2b9a9f8b9835" containerName="nova-manage" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.734884 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="920884a6-a7b0-49c6-abe7-2b9a9f8b9835" containerName="nova-manage" Dec 03 18:01:33 crc kubenswrapper[4687]: E1203 18:01:33.734950 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" containerName="init" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.735009 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" containerName="init" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.735407 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="ceilometer-central-agent" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 
18:01:33.735485 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" containerName="dnsmasq-dns" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.735546 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="512fe776-5298-42ac-b760-682e3b0d99e5" containerName="nova-cell1-conductor-db-sync" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.735619 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="sg-core" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.735704 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="ceilometer-notification-agent" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.735770 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" containerName="proxy-httpd" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.735828 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="920884a6-a7b0-49c6-abe7-2b9a9f8b9835" containerName="nova-manage" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.736520 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.739378 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.744331 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.792309 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.798277 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ae688d1-9d48-4692-8167-edcbaa1e98b7" (UID: "6ae688d1-9d48-4692-8167-edcbaa1e98b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.800571 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-config-data" (OuterVolumeSpecName: "config-data") pod "6ae688d1-9d48-4692-8167-edcbaa1e98b7" (UID: "6ae688d1-9d48-4692-8167-edcbaa1e98b7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.814787 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.815340 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-log" containerID="cri-o://78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba" gracePeriod=30 Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.816489 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-api" containerID="cri-o://a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8" gracePeriod=30 Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.841155 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.841447 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" containerName="nova-metadata-log" containerID="cri-o://2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598" gracePeriod=30 Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.841912 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" containerName="nova-metadata-metadata" containerID="cri-o://b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519" gracePeriod=30 Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.864376 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.894234 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d72bd8-fd40-4856-96ee-f753ba4c170b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d6d72bd8-fd40-4856-96ee-f753ba4c170b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.894611 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjdmg\" (UniqueName: \"kubernetes.io/projected/d6d72bd8-fd40-4856-96ee-f753ba4c170b-kube-api-access-qjdmg\") pod \"nova-cell1-conductor-0\" (UID: \"d6d72bd8-fd40-4856-96ee-f753ba4c170b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.894727 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d72bd8-fd40-4856-96ee-f753ba4c170b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d6d72bd8-fd40-4856-96ee-f753ba4c170b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.895091 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.895153 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ae688d1-9d48-4692-8167-edcbaa1e98b7-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.900272 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 18:01:33 crc kubenswrapper[4687]: I1203 18:01:33.900433 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 18:01:34 crc 
kubenswrapper[4687]: I1203 18:01:34.002023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d72bd8-fd40-4856-96ee-f753ba4c170b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d6d72bd8-fd40-4856-96ee-f753ba4c170b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.002328 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d72bd8-fd40-4856-96ee-f753ba4c170b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d6d72bd8-fd40-4856-96ee-f753ba4c170b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.002372 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjdmg\" (UniqueName: \"kubernetes.io/projected/d6d72bd8-fd40-4856-96ee-f753ba4c170b-kube-api-access-qjdmg\") pod \"nova-cell1-conductor-0\" (UID: \"d6d72bd8-fd40-4856-96ee-f753ba4c170b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.008278 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d72bd8-fd40-4856-96ee-f753ba4c170b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d6d72bd8-fd40-4856-96ee-f753ba4c170b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.017032 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d72bd8-fd40-4856-96ee-f753ba4c170b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d6d72bd8-fd40-4856-96ee-f753ba4c170b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.033059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qjdmg\" (UniqueName: \"kubernetes.io/projected/d6d72bd8-fd40-4856-96ee-f753ba4c170b-kube-api-access-qjdmg\") pod \"nova-cell1-conductor-0\" (UID: \"d6d72bd8-fd40-4856-96ee-f753ba4c170b\") " pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.068876 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.456989 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.600755 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.619816 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-nova-metadata-tls-certs\") pod \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.619907 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-config-data\") pod \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.619967 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-combined-ca-bundle\") pod \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.620017 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-logs\") pod \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.620086 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzcvm\" (UniqueName: \"kubernetes.io/projected/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-kube-api-access-nzcvm\") pod \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\" (UID: \"97da68f7-fc98-49c1-b4e0-cb3f52be0a51\") " Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.633369 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-kube-api-access-nzcvm" (OuterVolumeSpecName: "kube-api-access-nzcvm") pod "97da68f7-fc98-49c1-b4e0-cb3f52be0a51" (UID: "97da68f7-fc98-49c1-b4e0-cb3f52be0a51"). InnerVolumeSpecName "kube-api-access-nzcvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.633789 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-logs" (OuterVolumeSpecName: "logs") pod "97da68f7-fc98-49c1-b4e0-cb3f52be0a51" (UID: "97da68f7-fc98-49c1-b4e0-cb3f52be0a51"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.665262 4687 generic.go:334] "Generic (PLEG): container finished" podID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" containerID="b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519" exitCode=0 Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.665300 4687 generic.go:334] "Generic (PLEG): container finished" podID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" containerID="2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598" exitCode=143 Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.665355 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97da68f7-fc98-49c1-b4e0-cb3f52be0a51","Type":"ContainerDied","Data":"b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519"} Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.665364 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.665386 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97da68f7-fc98-49c1-b4e0-cb3f52be0a51","Type":"ContainerDied","Data":"2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598"} Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.665398 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"97da68f7-fc98-49c1-b4e0-cb3f52be0a51","Type":"ContainerDied","Data":"e174718eff3b66aafa8538f56d87b1b6bec0b08d8e137576981a9347ba3aebe4"} Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.665415 4687 scope.go:117] "RemoveContainer" containerID="b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.675312 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97da68f7-fc98-49c1-b4e0-cb3f52be0a51" (UID: "97da68f7-fc98-49c1-b4e0-cb3f52be0a51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.694171 4687 generic.go:334] "Generic (PLEG): container finished" podID="6e57b639-c060-44f7-88f5-810fe6779351" containerID="78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba" exitCode=143 Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.694264 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e57b639-c060-44f7-88f5-810fe6779351","Type":"ContainerDied","Data":"78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba"} Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.701492 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-config-data" (OuterVolumeSpecName: "config-data") pod "97da68f7-fc98-49c1-b4e0-cb3f52be0a51" (UID: "97da68f7-fc98-49c1-b4e0-cb3f52be0a51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.709571 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fd8d8971-3fb3-44ab-bf29-8a0596810a1a" containerName="nova-scheduler-scheduler" containerID="cri-o://6b07f7c2ae6fd5bb8c702a6edc4c2f22add5b0cb64f8ed429e420c5bc7fe9afe" gracePeriod=30 Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.709944 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.722432 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.722470 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.722483 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.722496 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzcvm\" (UniqueName: \"kubernetes.io/projected/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-kube-api-access-nzcvm\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.769371 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "97da68f7-fc98-49c1-b4e0-cb3f52be0a51" (UID: "97da68f7-fc98-49c1-b4e0-cb3f52be0a51"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.827640 4687 scope.go:117] "RemoveContainer" containerID="2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.828566 4687 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/97da68f7-fc98-49c1-b4e0-cb3f52be0a51-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.861337 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.870595 4687 scope.go:117] "RemoveContainer" containerID="b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519" Dec 03 18:01:34 crc kubenswrapper[4687]: E1203 18:01:34.873571 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519\": container with ID starting with b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519 not found: ID does not exist" containerID="b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.873631 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519"} err="failed to get container status \"b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519\": rpc error: code = NotFound desc = could not find container \"b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519\": container with ID starting with b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519 not found: ID does not exist" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.873663 4687 scope.go:117] "RemoveContainer" 
containerID="2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.877023 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:01:34 crc kubenswrapper[4687]: E1203 18:01:34.877175 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598\": container with ID starting with 2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598 not found: ID does not exist" containerID="2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.877209 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598"} err="failed to get container status \"2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598\": rpc error: code = NotFound desc = could not find container \"2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598\": container with ID starting with 2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598 not found: ID does not exist" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.877239 4687 scope.go:117] "RemoveContainer" containerID="b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.877720 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519"} err="failed to get container status \"b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519\": rpc error: code = NotFound desc = could not find container \"b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519\": container with ID starting with 
b9e215c8ac736b7f48b4b49f9afb63c38a722fa9ed31c2cad10440a5e3afd519 not found: ID does not exist" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.877757 4687 scope.go:117] "RemoveContainer" containerID="2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.879404 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598"} err="failed to get container status \"2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598\": rpc error: code = NotFound desc = could not find container \"2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598\": container with ID starting with 2d7ca2d8507617c9002e574a33e7dfb768843a75af8f84e8779642c2f44c9598 not found: ID does not exist" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.891894 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:01:34 crc kubenswrapper[4687]: E1203 18:01:34.893829 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" containerName="nova-metadata-metadata" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.893867 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" containerName="nova-metadata-metadata" Dec 03 18:01:34 crc kubenswrapper[4687]: E1203 18:01:34.893879 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" containerName="nova-metadata-log" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.893885 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" containerName="nova-metadata-log" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.894073 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" 
containerName="nova-metadata-log" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.894086 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" containerName="nova-metadata-metadata" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.908106 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.908222 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.910700 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 18:01:34 crc kubenswrapper[4687]: I1203 18:01:34.913620 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.007634 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.015863 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.025889 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.027526 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.030584 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.030816 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.034753 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-run-httpd\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.034852 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftlnz\" (UniqueName: \"kubernetes.io/projected/ebcc64a8-d12b-4430-97d4-a61051fc6306-kube-api-access-ftlnz\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.034881 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.034935 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-config-data\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.035063 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-log-httpd\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.035200 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.035254 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-scripts\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.038770 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.137173 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.137220 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-config-data\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.137264 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-log-httpd\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140439 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140480 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-scripts\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140517 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-run-httpd\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140641 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-logs\") pod \"nova-metadata-0\" 
(UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140693 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-config-data\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140712 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86qk5\" (UniqueName: \"kubernetes.io/projected/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-kube-api-access-86qk5\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140737 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftlnz\" (UniqueName: \"kubernetes.io/projected/ebcc64a8-d12b-4430-97d4-a61051fc6306-kube-api-access-ftlnz\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140773 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.140912 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-log-httpd\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.145949 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.146490 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-config-data\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.147828 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-run-httpd\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.148199 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.149625 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-scripts\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.161940 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftlnz\" (UniqueName: \"kubernetes.io/projected/ebcc64a8-d12b-4430-97d4-a61051fc6306-kube-api-access-ftlnz\") pod \"ceilometer-0\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " 
pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.238744 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.245535 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-config-data\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.245580 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86qk5\" (UniqueName: \"kubernetes.io/projected/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-kube-api-access-86qk5\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.245641 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.245696 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.245829 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-logs\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " 
pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.246335 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-logs\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.250993 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.251037 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-config-data\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.251110 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.274358 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86qk5\" (UniqueName: \"kubernetes.io/projected/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-kube-api-access-86qk5\") pod \"nova-metadata-0\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.346846 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.426840 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ae688d1-9d48-4692-8167-edcbaa1e98b7" path="/var/lib/kubelet/pods/6ae688d1-9d48-4692-8167-edcbaa1e98b7/volumes" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.427754 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97da68f7-fc98-49c1-b4e0-cb3f52be0a51" path="/var/lib/kubelet/pods/97da68f7-fc98-49c1-b4e0-cb3f52be0a51/volumes" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.428460 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0" path="/var/lib/kubelet/pods/9a3b5e3d-5ec7-431b-8c14-fc79b496f9b0/volumes" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.707826 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.723245 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerStarted","Data":"1af14a07a78b5cc6d9910a8cf9b648d451fa513b76f38c1a223361a56f06ef2b"} Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.724916 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d6d72bd8-fd40-4856-96ee-f753ba4c170b","Type":"ContainerStarted","Data":"d13e85aade6ee575afd334e3e9967a76649298953423c31af4903aaa88b58a50"} Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.725024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d6d72bd8-fd40-4856-96ee-f753ba4c170b","Type":"ContainerStarted","Data":"8f0746e10244468f49fe36e08a40c3e7905e5d66802c7bed810cd837d911cf56"} Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.725180 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.840024 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.839993699 podStartE2EDuration="2.839993699s" podCreationTimestamp="2025-12-03 18:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:01:35.74768077 +0000 UTC m=+1328.638376213" watchObservedRunningTime="2025-12-03 18:01:35.839993699 +0000 UTC m=+1328.730689132" Dec 03 18:01:35 crc kubenswrapper[4687]: W1203 18:01:35.843613 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00bec5d1_2b13_41e8_8204_d0aff2afc9d2.slice/crio-88386ad4a54b803853bdf840a0451d67080b804c0f89bc3279de2f29bd0e5ef4 WatchSource:0}: Error finding container 88386ad4a54b803853bdf840a0451d67080b804c0f89bc3279de2f29bd0e5ef4: Status 404 returned error can't find the container with id 88386ad4a54b803853bdf840a0451d67080b804c0f89bc3279de2f29bd0e5ef4 Dec 03 18:01:35 crc kubenswrapper[4687]: I1203 18:01:35.850743 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:01:36 crc kubenswrapper[4687]: I1203 18:01:36.738206 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerStarted","Data":"ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b"} Dec 03 18:01:36 crc kubenswrapper[4687]: I1203 18:01:36.740448 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"00bec5d1-2b13-41e8-8204-d0aff2afc9d2","Type":"ContainerStarted","Data":"cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278"} Dec 03 18:01:36 crc kubenswrapper[4687]: I1203 18:01:36.740583 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"00bec5d1-2b13-41e8-8204-d0aff2afc9d2","Type":"ContainerStarted","Data":"a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017"} Dec 03 18:01:36 crc kubenswrapper[4687]: I1203 18:01:36.740683 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"00bec5d1-2b13-41e8-8204-d0aff2afc9d2","Type":"ContainerStarted","Data":"88386ad4a54b803853bdf840a0451d67080b804c0f89bc3279de2f29bd0e5ef4"} Dec 03 18:01:36 crc kubenswrapper[4687]: I1203 18:01:36.763569 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.7635457620000001 podStartE2EDuration="1.763545762s" podCreationTimestamp="2025-12-03 18:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:01:36.760142869 +0000 UTC m=+1329.650838302" watchObservedRunningTime="2025-12-03 18:01:36.763545762 +0000 UTC m=+1329.654241195" Dec 03 18:01:36 crc kubenswrapper[4687]: E1203 18:01:36.969398 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b07f7c2ae6fd5bb8c702a6edc4c2f22add5b0cb64f8ed429e420c5bc7fe9afe" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 18:01:36 crc kubenswrapper[4687]: E1203 18:01:36.971955 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b07f7c2ae6fd5bb8c702a6edc4c2f22add5b0cb64f8ed429e420c5bc7fe9afe" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 18:01:36 crc kubenswrapper[4687]: E1203 18:01:36.978165 4687 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6b07f7c2ae6fd5bb8c702a6edc4c2f22add5b0cb64f8ed429e420c5bc7fe9afe" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 03 18:01:36 crc kubenswrapper[4687]: E1203 18:01:36.978287 4687 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fd8d8971-3fb3-44ab-bf29-8a0596810a1a" containerName="nova-scheduler-scheduler" Dec 03 18:01:37 crc kubenswrapper[4687]: I1203 18:01:37.752288 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerStarted","Data":"0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819"} Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.102267 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.224249 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.331938 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-combined-ca-bundle\") pod \"6e57b639-c060-44f7-88f5-810fe6779351\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.332212 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-config-data\") pod \"6e57b639-c060-44f7-88f5-810fe6779351\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.332310 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d9sf\" (UniqueName: \"kubernetes.io/projected/6e57b639-c060-44f7-88f5-810fe6779351-kube-api-access-5d9sf\") pod \"6e57b639-c060-44f7-88f5-810fe6779351\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.332368 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e57b639-c060-44f7-88f5-810fe6779351-logs\") pod \"6e57b639-c060-44f7-88f5-810fe6779351\" (UID: \"6e57b639-c060-44f7-88f5-810fe6779351\") " Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.333740 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e57b639-c060-44f7-88f5-810fe6779351-logs" (OuterVolumeSpecName: "logs") pod "6e57b639-c060-44f7-88f5-810fe6779351" (UID: "6e57b639-c060-44f7-88f5-810fe6779351"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.346766 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e57b639-c060-44f7-88f5-810fe6779351-kube-api-access-5d9sf" (OuterVolumeSpecName: "kube-api-access-5d9sf") pod "6e57b639-c060-44f7-88f5-810fe6779351" (UID: "6e57b639-c060-44f7-88f5-810fe6779351"). InnerVolumeSpecName "kube-api-access-5d9sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.369133 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-config-data" (OuterVolumeSpecName: "config-data") pod "6e57b639-c060-44f7-88f5-810fe6779351" (UID: "6e57b639-c060-44f7-88f5-810fe6779351"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.369884 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e57b639-c060-44f7-88f5-810fe6779351" (UID: "6e57b639-c060-44f7-88f5-810fe6779351"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.440336 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.440410 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e57b639-c060-44f7-88f5-810fe6779351-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.440446 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d9sf\" (UniqueName: \"kubernetes.io/projected/6e57b639-c060-44f7-88f5-810fe6779351-kube-api-access-5d9sf\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.440470 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e57b639-c060-44f7-88f5-810fe6779351-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.780695 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerStarted","Data":"32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176"} Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.782596 4687 generic.go:334] "Generic (PLEG): container finished" podID="fd8d8971-3fb3-44ab-bf29-8a0596810a1a" containerID="6b07f7c2ae6fd5bb8c702a6edc4c2f22add5b0cb64f8ed429e420c5bc7fe9afe" exitCode=0 Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.782700 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd8d8971-3fb3-44ab-bf29-8a0596810a1a","Type":"ContainerDied","Data":"6b07f7c2ae6fd5bb8c702a6edc4c2f22add5b0cb64f8ed429e420c5bc7fe9afe"} Dec 03 18:01:39 crc 
kubenswrapper[4687]: I1203 18:01:39.785664 4687 generic.go:334] "Generic (PLEG): container finished" podID="6e57b639-c060-44f7-88f5-810fe6779351" containerID="a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8" exitCode=0 Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.785709 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e57b639-c060-44f7-88f5-810fe6779351","Type":"ContainerDied","Data":"a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8"} Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.785736 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6e57b639-c060-44f7-88f5-810fe6779351","Type":"ContainerDied","Data":"eb7110f33e005f64777972811b3c4d150b7652b3ec9f8684d18f4d57cf1ca168"} Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.785756 4687 scope.go:117] "RemoveContainer" containerID="a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.785890 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.822404 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.840684 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.840941 4687 scope.go:117] "RemoveContainer" containerID="78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.852726 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.881246 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 18:01:39 crc kubenswrapper[4687]: E1203 18:01:39.881686 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-api" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.881705 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-api" Dec 03 18:01:39 crc kubenswrapper[4687]: E1203 18:01:39.881713 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8d8971-3fb3-44ab-bf29-8a0596810a1a" containerName="nova-scheduler-scheduler" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.881730 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8d8971-3fb3-44ab-bf29-8a0596810a1a" containerName="nova-scheduler-scheduler" Dec 03 18:01:39 crc kubenswrapper[4687]: E1203 18:01:39.881759 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-log" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.881767 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-log" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.881948 4687 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-api" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.881961 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd8d8971-3fb3-44ab-bf29-8a0596810a1a" containerName="nova-scheduler-scheduler" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.881972 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e57b639-c060-44f7-88f5-810fe6779351" containerName="nova-api-log" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.882991 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.885528 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.900036 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.945301 4687 scope.go:117] "RemoveContainer" containerID="a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8" Dec 03 18:01:39 crc kubenswrapper[4687]: E1203 18:01:39.946153 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8\": container with ID starting with a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8 not found: ID does not exist" containerID="a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.946202 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8"} err="failed to get container status \"a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8\": rpc error: code = NotFound desc = could not find 
container \"a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8\": container with ID starting with a1977a401ef3254f308486592c629a261c51acd6be115c6a4b4a9890161bada8 not found: ID does not exist" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.946230 4687 scope.go:117] "RemoveContainer" containerID="78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba" Dec 03 18:01:39 crc kubenswrapper[4687]: E1203 18:01:39.946624 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba\": container with ID starting with 78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba not found: ID does not exist" containerID="78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.946673 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba"} err="failed to get container status \"78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba\": rpc error: code = NotFound desc = could not find container \"78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba\": container with ID starting with 78025f14acd7bae461b57c686a76971f7698e41f25ad9bd6a5e038b4f7f69dba not found: ID does not exist" Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.970198 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs5fr\" (UniqueName: \"kubernetes.io/projected/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-kube-api-access-hs5fr\") pod \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.970360 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-combined-ca-bundle\") pod \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.970387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-config-data\") pod \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\" (UID: \"fd8d8971-3fb3-44ab-bf29-8a0596810a1a\") " Dec 03 18:01:39 crc kubenswrapper[4687]: I1203 18:01:39.976600 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-kube-api-access-hs5fr" (OuterVolumeSpecName: "kube-api-access-hs5fr") pod "fd8d8971-3fb3-44ab-bf29-8a0596810a1a" (UID: "fd8d8971-3fb3-44ab-bf29-8a0596810a1a"). InnerVolumeSpecName "kube-api-access-hs5fr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.004531 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-config-data" (OuterVolumeSpecName: "config-data") pod "fd8d8971-3fb3-44ab-bf29-8a0596810a1a" (UID: "fd8d8971-3fb3-44ab-bf29-8a0596810a1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.013566 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd8d8971-3fb3-44ab-bf29-8a0596810a1a" (UID: "fd8d8971-3fb3-44ab-bf29-8a0596810a1a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.072240 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-logs\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.072315 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtgg\" (UniqueName: \"kubernetes.io/projected/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-kube-api-access-cwtgg\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.072355 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-config-data\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.072392 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.072480 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs5fr\" (UniqueName: \"kubernetes.io/projected/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-kube-api-access-hs5fr\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.072504 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.072516 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8d8971-3fb3-44ab-bf29-8a0596810a1a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.173882 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-logs\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.174330 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwtgg\" (UniqueName: \"kubernetes.io/projected/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-kube-api-access-cwtgg\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.174365 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-config-data\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.174391 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.174278 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-logs\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.179486 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.180607 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-config-data\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.190834 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwtgg\" (UniqueName: \"kubernetes.io/projected/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-kube-api-access-cwtgg\") pod \"nova-api-0\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.233189 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.348341 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.349257 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.760513 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:01:40 crc kubenswrapper[4687]: W1203 18:01:40.764054 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4ea4641_87b6_4232_8211_aa0e20aa6f5f.slice/crio-8d48b0f2464e04697e919ac5e208cb49d55c4d563274eed4b9ef54c6ef4272cd WatchSource:0}: Error finding container 8d48b0f2464e04697e919ac5e208cb49d55c4d563274eed4b9ef54c6ef4272cd: Status 404 returned error can't find the container with id 8d48b0f2464e04697e919ac5e208cb49d55c4d563274eed4b9ef54c6ef4272cd Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.800870 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ea4641-87b6-4232-8211-aa0e20aa6f5f","Type":"ContainerStarted","Data":"8d48b0f2464e04697e919ac5e208cb49d55c4d563274eed4b9ef54c6ef4272cd"} Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.803047 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fd8d8971-3fb3-44ab-bf29-8a0596810a1a","Type":"ContainerDied","Data":"284e6e37a29be997d01614c01570ddc355df611a32ca5e49935e017579e6487f"} Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.803101 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.803216 4687 scope.go:117] "RemoveContainer" containerID="6b07f7c2ae6fd5bb8c702a6edc4c2f22add5b0cb64f8ed429e420c5bc7fe9afe" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.816718 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerStarted","Data":"332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f"} Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.841437 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.550125727 podStartE2EDuration="6.841410194s" podCreationTimestamp="2025-12-03 18:01:34 +0000 UTC" firstStartedPulling="2025-12-03 18:01:35.703733436 +0000 UTC m=+1328.594428889" lastFinishedPulling="2025-12-03 18:01:39.995017923 +0000 UTC m=+1332.885713356" observedRunningTime="2025-12-03 18:01:40.833856211 +0000 UTC m=+1333.724551644" watchObservedRunningTime="2025-12-03 18:01:40.841410194 +0000 UTC m=+1333.732105627" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.881155 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.895766 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.906206 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.907828 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.910619 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 18:01:40 crc kubenswrapper[4687]: I1203 18:01:40.914389 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.092268 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.092324 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-config-data\") pod \"nova-scheduler-0\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.092351 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srckx\" (UniqueName: \"kubernetes.io/projected/bc519674-30e5-4d39-a64a-8f483b144211-kube-api-access-srckx\") pod \"nova-scheduler-0\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.195634 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.197859 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-config-data\") pod \"nova-scheduler-0\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.198003 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srckx\" (UniqueName: \"kubernetes.io/projected/bc519674-30e5-4d39-a64a-8f483b144211-kube-api-access-srckx\") pod \"nova-scheduler-0\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.201569 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-config-data\") pod \"nova-scheduler-0\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.202467 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.218955 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srckx\" (UniqueName: \"kubernetes.io/projected/bc519674-30e5-4d39-a64a-8f483b144211-kube-api-access-srckx\") pod \"nova-scheduler-0\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.239888 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.419985 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e57b639-c060-44f7-88f5-810fe6779351" path="/var/lib/kubelet/pods/6e57b639-c060-44f7-88f5-810fe6779351/volumes" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.420983 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8d8971-3fb3-44ab-bf29-8a0596810a1a" path="/var/lib/kubelet/pods/fd8d8971-3fb3-44ab-bf29-8a0596810a1a/volumes" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.679792 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:01:41 crc kubenswrapper[4687]: W1203 18:01:41.690577 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc519674_30e5_4d39_a64a_8f483b144211.slice/crio-645543121ccc2f487a1617e4a705fbe2c9e0149ee668b981f161e1cd3da2c518 WatchSource:0}: Error finding container 645543121ccc2f487a1617e4a705fbe2c9e0149ee668b981f161e1cd3da2c518: Status 404 returned error can't find the container with id 645543121ccc2f487a1617e4a705fbe2c9e0149ee668b981f161e1cd3da2c518 Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.846779 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ea4641-87b6-4232-8211-aa0e20aa6f5f","Type":"ContainerStarted","Data":"7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3"} Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.846827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ea4641-87b6-4232-8211-aa0e20aa6f5f","Type":"ContainerStarted","Data":"5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730"} Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.849282 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"bc519674-30e5-4d39-a64a-8f483b144211","Type":"ContainerStarted","Data":"645543121ccc2f487a1617e4a705fbe2c9e0149ee668b981f161e1cd3da2c518"} Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.849976 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 18:01:41 crc kubenswrapper[4687]: I1203 18:01:41.870054 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.87003632 podStartE2EDuration="2.87003632s" podCreationTimestamp="2025-12-03 18:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:01:41.862065884 +0000 UTC m=+1334.752761317" watchObservedRunningTime="2025-12-03 18:01:41.87003632 +0000 UTC m=+1334.760731753" Dec 03 18:01:42 crc kubenswrapper[4687]: I1203 18:01:42.861237 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc519674-30e5-4d39-a64a-8f483b144211","Type":"ContainerStarted","Data":"48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2"} Dec 03 18:01:42 crc kubenswrapper[4687]: I1203 18:01:42.880379 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.8803608499999998 podStartE2EDuration="2.88036085s" podCreationTimestamp="2025-12-03 18:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:01:42.877280508 +0000 UTC m=+1335.767975941" watchObservedRunningTime="2025-12-03 18:01:42.88036085 +0000 UTC m=+1335.771056283" Dec 03 18:01:45 crc kubenswrapper[4687]: I1203 18:01:45.347907 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 18:01:45 crc kubenswrapper[4687]: I1203 18:01:45.348561 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 18:01:46 crc kubenswrapper[4687]: I1203 18:01:46.240961 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 18:01:46 crc kubenswrapper[4687]: I1203 18:01:46.370325 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 18:01:46 crc kubenswrapper[4687]: I1203 18:01:46.370396 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 18:01:50 crc kubenswrapper[4687]: I1203 18:01:50.233865 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 18:01:50 crc kubenswrapper[4687]: I1203 18:01:50.235431 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 18:01:51 crc kubenswrapper[4687]: I1203 18:01:51.240869 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 18:01:51 crc kubenswrapper[4687]: I1203 18:01:51.273020 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 18:01:51 crc kubenswrapper[4687]: I1203 18:01:51.316341 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Dec 03 18:01:51 crc kubenswrapper[4687]: I1203 18:01:51.316381 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 18:01:52 crc kubenswrapper[4687]: I1203 18:01:52.012552 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 18:01:55 crc kubenswrapper[4687]: I1203 18:01:55.354659 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 18:01:55 crc kubenswrapper[4687]: I1203 18:01:55.356184 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 18:01:55 crc kubenswrapper[4687]: I1203 18:01:55.362087 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 18:01:55 crc kubenswrapper[4687]: I1203 18:01:55.362208 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 18:01:57 crc kubenswrapper[4687]: I1203 18:01:57.910569 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.026934 4687 generic.go:334] "Generic (PLEG): container finished" podID="1c41e1bd-000c-4939-ac34-fb3476bf68e5" containerID="56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900" exitCode=137 Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.026991 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1c41e1bd-000c-4939-ac34-fb3476bf68e5","Type":"ContainerDied","Data":"56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900"} Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.027024 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1c41e1bd-000c-4939-ac34-fb3476bf68e5","Type":"ContainerDied","Data":"4369b702f553b4223c8bca97b0a98fe03ee85ffe0316b1859680f769f85bf165"} Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.027025 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.027044 4687 scope.go:117] "RemoveContainer" containerID="56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.044626 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-config-data\") pod \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.045066 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whz72\" (UniqueName: \"kubernetes.io/projected/1c41e1bd-000c-4939-ac34-fb3476bf68e5-kube-api-access-whz72\") pod \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.045224 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-combined-ca-bundle\") pod \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\" (UID: \"1c41e1bd-000c-4939-ac34-fb3476bf68e5\") " Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.054130 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c41e1bd-000c-4939-ac34-fb3476bf68e5-kube-api-access-whz72" (OuterVolumeSpecName: "kube-api-access-whz72") pod "1c41e1bd-000c-4939-ac34-fb3476bf68e5" (UID: "1c41e1bd-000c-4939-ac34-fb3476bf68e5"). InnerVolumeSpecName "kube-api-access-whz72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.057057 4687 scope.go:117] "RemoveContainer" containerID="56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900" Dec 03 18:01:58 crc kubenswrapper[4687]: E1203 18:01:58.057603 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900\": container with ID starting with 56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900 not found: ID does not exist" containerID="56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.057715 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900"} err="failed to get container status \"56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900\": rpc error: code = NotFound desc = could not find container \"56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900\": container with ID starting with 56d7a7942f7d70950bd73e61bf1e43946838c0ff8e77a0956c4171c35e9d3900 not found: ID does not exist" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.078725 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c41e1bd-000c-4939-ac34-fb3476bf68e5" (UID: "1c41e1bd-000c-4939-ac34-fb3476bf68e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.092398 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-config-data" (OuterVolumeSpecName: "config-data") pod "1c41e1bd-000c-4939-ac34-fb3476bf68e5" (UID: "1c41e1bd-000c-4939-ac34-fb3476bf68e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.147294 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.147505 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whz72\" (UniqueName: \"kubernetes.io/projected/1c41e1bd-000c-4939-ac34-fb3476bf68e5-kube-api-access-whz72\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.147569 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c41e1bd-000c-4939-ac34-fb3476bf68e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.371438 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.387178 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.402348 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 18:01:58 crc kubenswrapper[4687]: E1203 18:01:58.402841 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c41e1bd-000c-4939-ac34-fb3476bf68e5" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 
18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.402868 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c41e1bd-000c-4939-ac34-fb3476bf68e5" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.403145 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c41e1bd-000c-4939-ac34-fb3476bf68e5" containerName="nova-cell1-novncproxy-novncproxy" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.403923 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.406991 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.407363 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.409163 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.411586 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.562873 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.563241 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.563398 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.563564 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.563677 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pxbj\" (UniqueName: \"kubernetes.io/projected/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-kube-api-access-7pxbj\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.665027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.665141 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-vencrypt-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.665170 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pxbj\" (UniqueName: \"kubernetes.io/projected/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-kube-api-access-7pxbj\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.665253 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.665287 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.670949 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.671388 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.671615 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.685819 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.701697 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pxbj\" (UniqueName: \"kubernetes.io/projected/c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90-kube-api-access-7pxbj\") pod \"nova-cell1-novncproxy-0\" (UID: \"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90\") " pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:58 crc kubenswrapper[4687]: I1203 18:01:58.761336 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:01:59 crc kubenswrapper[4687]: I1203 18:01:59.196146 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 03 18:01:59 crc kubenswrapper[4687]: W1203 18:01:59.196670 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b5f7c9_5d07_41ea_8c3b_3e23a3215c90.slice/crio-b5da66b67e8b3b787cba10563986fa97c6b5e9709744424de0375b3bc7e1aa13 WatchSource:0}: Error finding container b5da66b67e8b3b787cba10563986fa97c6b5e9709744424de0375b3bc7e1aa13: Status 404 returned error can't find the container with id b5da66b67e8b3b787cba10563986fa97c6b5e9709744424de0375b3bc7e1aa13 Dec 03 18:01:59 crc kubenswrapper[4687]: I1203 18:01:59.418644 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c41e1bd-000c-4939-ac34-fb3476bf68e5" path="/var/lib/kubelet/pods/1c41e1bd-000c-4939-ac34-fb3476bf68e5/volumes" Dec 03 18:02:00 crc kubenswrapper[4687]: I1203 18:02:00.048616 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90","Type":"ContainerStarted","Data":"92b194161db6761298d08dafe47b2d5225e6badf77e60de66dda1759f5432f01"} Dec 03 18:02:00 crc kubenswrapper[4687]: I1203 18:02:00.048786 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90","Type":"ContainerStarted","Data":"b5da66b67e8b3b787cba10563986fa97c6b5e9709744424de0375b3bc7e1aa13"} Dec 03 18:02:00 crc kubenswrapper[4687]: I1203 18:02:00.075343 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.075324833 podStartE2EDuration="2.075324833s" podCreationTimestamp="2025-12-03 18:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:02:00.065375095 +0000 UTC m=+1352.956070538" watchObservedRunningTime="2025-12-03 18:02:00.075324833 +0000 UTC m=+1352.966020266" Dec 03 18:02:00 crc kubenswrapper[4687]: I1203 18:02:00.237800 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 18:02:00 crc kubenswrapper[4687]: I1203 18:02:00.238397 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 18:02:00 crc kubenswrapper[4687]: I1203 18:02:00.240873 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 18:02:00 crc kubenswrapper[4687]: I1203 18:02:00.241573 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.064628 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.069324 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.299164 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jr2rp"] Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.314898 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.319657 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jr2rp"] Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.421073 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.421191 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gpxj\" (UniqueName: \"kubernetes.io/projected/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-kube-api-access-5gpxj\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.421224 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.421292 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.421416 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-config\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.421484 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.523698 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-config\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.523764 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.523869 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.523893 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5gpxj\" (UniqueName: \"kubernetes.io/projected/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-kube-api-access-5gpxj\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.523908 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.523940 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.524843 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.525060 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.525436 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.526214 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.527879 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-config\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.549251 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gpxj\" (UniqueName: \"kubernetes.io/projected/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-kube-api-access-5gpxj\") pod \"dnsmasq-dns-89c5cd4d5-jr2rp\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:01 crc kubenswrapper[4687]: I1203 18:02:01.639282 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:02 crc kubenswrapper[4687]: I1203 18:02:02.132317 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jr2rp"] Dec 03 18:02:03 crc kubenswrapper[4687]: I1203 18:02:03.086183 4687 generic.go:334] "Generic (PLEG): container finished" podID="3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" containerID="1f008fa1ba4a11ac3c415a58884fad052498627ef9d69cfa7c42592fe29da64a" exitCode=0 Dec 03 18:02:03 crc kubenswrapper[4687]: I1203 18:02:03.086333 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" event={"ID":"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba","Type":"ContainerDied","Data":"1f008fa1ba4a11ac3c415a58884fad052498627ef9d69cfa7c42592fe29da64a"} Dec 03 18:02:03 crc kubenswrapper[4687]: I1203 18:02:03.086931 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" event={"ID":"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba","Type":"ContainerStarted","Data":"fd0a4dad4e1afa60a655ef3cb03a009ed795f8dce0c3d61c429ffc0bf701210f"} Dec 03 18:02:03 crc kubenswrapper[4687]: I1203 18:02:03.428234 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:03 crc kubenswrapper[4687]: I1203 18:02:03.428746 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="ceilometer-central-agent" containerID="cri-o://ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b" gracePeriod=30 Dec 03 18:02:03 crc kubenswrapper[4687]: I1203 18:02:03.428863 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="proxy-httpd" containerID="cri-o://332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f" gracePeriod=30 Dec 03 18:02:03 crc 
kubenswrapper[4687]: I1203 18:02:03.428905 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="sg-core" containerID="cri-o://32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176" gracePeriod=30 Dec 03 18:02:03 crc kubenswrapper[4687]: I1203 18:02:03.428939 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="ceilometer-notification-agent" containerID="cri-o://0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819" gracePeriod=30 Dec 03 18:02:03 crc kubenswrapper[4687]: I1203 18:02:03.442704 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.194:3000/\": EOF" Dec 03 18:02:03 crc kubenswrapper[4687]: I1203 18:02:03.761630 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.082057 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.101708 4687 generic.go:334] "Generic (PLEG): container finished" podID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerID="332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f" exitCode=0 Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.101740 4687 generic.go:334] "Generic (PLEG): container finished" podID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerID="32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176" exitCode=2 Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.101748 4687 generic.go:334] "Generic (PLEG): container finished" podID="ebcc64a8-d12b-4430-97d4-a61051fc6306" 
containerID="ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b" exitCode=0 Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.101792 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerDied","Data":"332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f"} Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.101817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerDied","Data":"32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176"} Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.101829 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerDied","Data":"ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b"} Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.106530 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerName="nova-api-log" containerID="cri-o://5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730" gracePeriod=30 Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.107279 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" event={"ID":"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba","Type":"ContainerStarted","Data":"1d82834b7d62075219bc3b8298abe804b32ef5397a7ada4229b8eada8623caa5"} Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.107317 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerName="nova-api-api" containerID="cri-o://7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3" gracePeriod=30 Dec 03 18:02:04 crc 
kubenswrapper[4687]: I1203 18:02:04.107334 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:04 crc kubenswrapper[4687]: I1203 18:02:04.136100 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" podStartSLOduration=3.136080514 podStartE2EDuration="3.136080514s" podCreationTimestamp="2025-12-03 18:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:02:04.130934256 +0000 UTC m=+1357.021629699" watchObservedRunningTime="2025-12-03 18:02:04.136080514 +0000 UTC m=+1357.026775957" Dec 03 18:02:05 crc kubenswrapper[4687]: I1203 18:02:05.116741 4687 generic.go:334] "Generic (PLEG): container finished" podID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerID="5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730" exitCode=143 Dec 03 18:02:05 crc kubenswrapper[4687]: I1203 18:02:05.116786 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ea4641-87b6-4232-8211-aa0e20aa6f5f","Type":"ContainerDied","Data":"5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730"} Dec 03 18:02:05 crc kubenswrapper[4687]: I1203 18:02:05.245336 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.194:3000/\": dial tcp 10.217.0.194:3000: connect: connection refused" Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.690317 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.851471 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-combined-ca-bundle\") pod \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.851659 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-config-data\") pod \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.851763 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwtgg\" (UniqueName: \"kubernetes.io/projected/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-kube-api-access-cwtgg\") pod \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.851840 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-logs\") pod \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\" (UID: \"a4ea4641-87b6-4232-8211-aa0e20aa6f5f\") " Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.854829 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-logs" (OuterVolumeSpecName: "logs") pod "a4ea4641-87b6-4232-8211-aa0e20aa6f5f" (UID: "a4ea4641-87b6-4232-8211-aa0e20aa6f5f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.859860 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-kube-api-access-cwtgg" (OuterVolumeSpecName: "kube-api-access-cwtgg") pod "a4ea4641-87b6-4232-8211-aa0e20aa6f5f" (UID: "a4ea4641-87b6-4232-8211-aa0e20aa6f5f"). InnerVolumeSpecName "kube-api-access-cwtgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.888171 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-config-data" (OuterVolumeSpecName: "config-data") pod "a4ea4641-87b6-4232-8211-aa0e20aa6f5f" (UID: "a4ea4641-87b6-4232-8211-aa0e20aa6f5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.896647 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4ea4641-87b6-4232-8211-aa0e20aa6f5f" (UID: "a4ea4641-87b6-4232-8211-aa0e20aa6f5f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.954478 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwtgg\" (UniqueName: \"kubernetes.io/projected/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-kube-api-access-cwtgg\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.954507 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.954517 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:07 crc kubenswrapper[4687]: I1203 18:02:07.954525 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ea4641-87b6-4232-8211-aa0e20aa6f5f-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.142103 4687 generic.go:334] "Generic (PLEG): container finished" podID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerID="7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3" exitCode=0 Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.142248 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ea4641-87b6-4232-8211-aa0e20aa6f5f","Type":"ContainerDied","Data":"7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3"} Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.142274 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4ea4641-87b6-4232-8211-aa0e20aa6f5f","Type":"ContainerDied","Data":"8d48b0f2464e04697e919ac5e208cb49d55c4d563274eed4b9ef54c6ef4272cd"} Dec 03 18:02:08 crc kubenswrapper[4687]: 
I1203 18:02:08.142290 4687 scope.go:117] "RemoveContainer" containerID="7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.142405 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.174278 4687 scope.go:117] "RemoveContainer" containerID="5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.200817 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.233102 4687 scope.go:117] "RemoveContainer" containerID="7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3" Dec 03 18:02:08 crc kubenswrapper[4687]: E1203 18:02:08.233628 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3\": container with ID starting with 7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3 not found: ID does not exist" containerID="7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.233772 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3"} err="failed to get container status \"7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3\": rpc error: code = NotFound desc = could not find container \"7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3\": container with ID starting with 7cc680b07907408abeec60cb5e175e48cc1cc9da67961d6ae911fe2cee3ecad3 not found: ID does not exist" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.233876 4687 scope.go:117] "RemoveContainer" 
containerID="5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730" Dec 03 18:02:08 crc kubenswrapper[4687]: E1203 18:02:08.234323 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730\": container with ID starting with 5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730 not found: ID does not exist" containerID="5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.234356 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730"} err="failed to get container status \"5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730\": rpc error: code = NotFound desc = could not find container \"5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730\": container with ID starting with 5bc081dcaa50787cda332845538855677e29f11feac433fa8a0ed2995afa5730 not found: ID does not exist" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.291821 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.347939 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:08 crc kubenswrapper[4687]: E1203 18:02:08.348491 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerName="nova-api-api" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.348515 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerName="nova-api-api" Dec 03 18:02:08 crc kubenswrapper[4687]: E1203 18:02:08.348543 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" 
containerName="nova-api-log" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.348553 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerName="nova-api-log" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.348793 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerName="nova-api-api" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.348816 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" containerName="nova-api-log" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.350137 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.352589 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.352767 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.354753 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.355491 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.494647 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.494761 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.494814 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m6j4\" (UniqueName: \"kubernetes.io/projected/b93569ee-e954-4736-8b09-b92b48690d98-kube-api-access-4m6j4\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.494903 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-config-data\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.494933 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-public-tls-certs\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.494982 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b93569ee-e954-4736-8b09-b92b48690d98-logs\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.596445 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " 
pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.596544 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.596587 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m6j4\" (UniqueName: \"kubernetes.io/projected/b93569ee-e954-4736-8b09-b92b48690d98-kube-api-access-4m6j4\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.596670 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-config-data\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.596700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-public-tls-certs\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.596753 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b93569ee-e954-4736-8b09-b92b48690d98-logs\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.597257 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b93569ee-e954-4736-8b09-b92b48690d98-logs\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.602012 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-config-data\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.603221 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.604524 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.614543 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-public-tls-certs\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.618637 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m6j4\" (UniqueName: \"kubernetes.io/projected/b93569ee-e954-4736-8b09-b92b48690d98-kube-api-access-4m6j4\") pod \"nova-api-0\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.678586 4687 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.761846 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:02:08 crc kubenswrapper[4687]: I1203 18:02:08.784027 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.058282 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.161657 4687 generic.go:334] "Generic (PLEG): container finished" podID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerID="0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819" exitCode=0 Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.161809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerDied","Data":"0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819"} Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.161849 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ebcc64a8-d12b-4430-97d4-a61051fc6306","Type":"ContainerDied","Data":"1af14a07a78b5cc6d9910a8cf9b648d451fa513b76f38c1a223361a56f06ef2b"} Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.161872 4687 scope.go:117] "RemoveContainer" containerID="332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.162007 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.182106 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.194397 4687 scope.go:117] "RemoveContainer" containerID="32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.212055 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-sg-core-conf-yaml\") pod \"ebcc64a8-d12b-4430-97d4-a61051fc6306\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.212252 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftlnz\" (UniqueName: \"kubernetes.io/projected/ebcc64a8-d12b-4430-97d4-a61051fc6306-kube-api-access-ftlnz\") pod \"ebcc64a8-d12b-4430-97d4-a61051fc6306\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.212297 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-config-data\") pod \"ebcc64a8-d12b-4430-97d4-a61051fc6306\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.212331 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-run-httpd\") pod \"ebcc64a8-d12b-4430-97d4-a61051fc6306\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.212361 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-scripts\") pod \"ebcc64a8-d12b-4430-97d4-a61051fc6306\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.212416 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-combined-ca-bundle\") pod \"ebcc64a8-d12b-4430-97d4-a61051fc6306\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.212542 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-log-httpd\") pod \"ebcc64a8-d12b-4430-97d4-a61051fc6306\" (UID: \"ebcc64a8-d12b-4430-97d4-a61051fc6306\") " Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.213799 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ebcc64a8-d12b-4430-97d4-a61051fc6306" (UID: "ebcc64a8-d12b-4430-97d4-a61051fc6306"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.221281 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ebcc64a8-d12b-4430-97d4-a61051fc6306" (UID: "ebcc64a8-d12b-4430-97d4-a61051fc6306"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.228654 4687 scope.go:117] "RemoveContainer" containerID="0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.231715 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-scripts" (OuterVolumeSpecName: "scripts") pod "ebcc64a8-d12b-4430-97d4-a61051fc6306" (UID: "ebcc64a8-d12b-4430-97d4-a61051fc6306"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.234414 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebcc64a8-d12b-4430-97d4-a61051fc6306-kube-api-access-ftlnz" (OuterVolumeSpecName: "kube-api-access-ftlnz") pod "ebcc64a8-d12b-4430-97d4-a61051fc6306" (UID: "ebcc64a8-d12b-4430-97d4-a61051fc6306"). InnerVolumeSpecName "kube-api-access-ftlnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.262920 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.276440 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ebcc64a8-d12b-4430-97d4-a61051fc6306" (UID: "ebcc64a8-d12b-4430-97d4-a61051fc6306"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.280514 4687 scope.go:117] "RemoveContainer" containerID="ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.319735 4687 scope.go:117] "RemoveContainer" containerID="332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f" Dec 03 18:02:09 crc kubenswrapper[4687]: E1203 18:02:09.321508 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f\": container with ID starting with 332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f not found: ID does not exist" containerID="332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.321542 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f"} err="failed to get container status \"332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f\": rpc error: code = NotFound desc = could not find container \"332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f\": container with ID starting with 332299d043cb79c7763cfbdb228d5ddc340ee9587f185ad12bc09a6f33ed241f not found: ID does not exist" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.321569 4687 scope.go:117] "RemoveContainer" containerID="32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.323474 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.323515 4687 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftlnz\" (UniqueName: \"kubernetes.io/projected/ebcc64a8-d12b-4430-97d4-a61051fc6306-kube-api-access-ftlnz\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.323531 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.323542 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.323553 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ebcc64a8-d12b-4430-97d4-a61051fc6306-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:09 crc kubenswrapper[4687]: E1203 18:02:09.325677 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176\": container with ID starting with 32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176 not found: ID does not exist" containerID="32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.325726 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176"} err="failed to get container status \"32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176\": rpc error: code = NotFound desc = could not find container \"32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176\": container with ID starting with 
32737571499baad093aa94278d188ac7fd6167582939ace277d28cb4f3efb176 not found: ID does not exist" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.325758 4687 scope.go:117] "RemoveContainer" containerID="0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819" Dec 03 18:02:09 crc kubenswrapper[4687]: E1203 18:02:09.326080 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819\": container with ID starting with 0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819 not found: ID does not exist" containerID="0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.326108 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819"} err="failed to get container status \"0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819\": rpc error: code = NotFound desc = could not find container \"0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819\": container with ID starting with 0dd651e88e7fd4a6f2583f72542ef04a2a547becb524ca0bfca2559b892a5819 not found: ID does not exist" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.326146 4687 scope.go:117] "RemoveContainer" containerID="ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b" Dec 03 18:02:09 crc kubenswrapper[4687]: E1203 18:02:09.326566 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b\": container with ID starting with ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b not found: ID does not exist" containerID="ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b" Dec 03 18:02:09 crc 
kubenswrapper[4687]: I1203 18:02:09.326589 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b"} err="failed to get container status \"ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b\": rpc error: code = NotFound desc = could not find container \"ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b\": container with ID starting with ba6db7097e3e53902883d8996b5cd2c209ee33ddef9aa6cf4fdee0f10564c17b not found: ID does not exist" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.350559 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebcc64a8-d12b-4430-97d4-a61051fc6306" (UID: "ebcc64a8-d12b-4430-97d4-a61051fc6306"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.360842 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-z97pq"] Dec 03 18:02:09 crc kubenswrapper[4687]: E1203 18:02:09.370547 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="proxy-httpd" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.370609 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="proxy-httpd" Dec 03 18:02:09 crc kubenswrapper[4687]: E1203 18:02:09.370656 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="sg-core" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.370667 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="sg-core" Dec 03 18:02:09 crc kubenswrapper[4687]: E1203 
18:02:09.370693 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="ceilometer-notification-agent" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.370702 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="ceilometer-notification-agent" Dec 03 18:02:09 crc kubenswrapper[4687]: E1203 18:02:09.370738 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="ceilometer-central-agent" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.370747 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="ceilometer-central-agent" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.371183 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="sg-core" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.371219 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="proxy-httpd" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.371244 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="ceilometer-notification-agent" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.371258 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" containerName="ceilometer-central-agent" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.372195 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z97pq"] Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.372359 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.374935 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.375242 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.396742 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-config-data" (OuterVolumeSpecName: "config-data") pod "ebcc64a8-d12b-4430-97d4-a61051fc6306" (UID: "ebcc64a8-d12b-4430-97d4-a61051fc6306"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.426053 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ea4641-87b6-4232-8211-aa0e20aa6f5f" path="/var/lib/kubelet/pods/a4ea4641-87b6-4232-8211-aa0e20aa6f5f/volumes" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.435486 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.435517 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebcc64a8-d12b-4430-97d4-a61051fc6306-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.489995 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.502891 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.512724 4687 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.515497 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.518278 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.521255 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.536876 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.537206 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmh4v\" (UniqueName: \"kubernetes.io/projected/2671f702-a121-43e8-be5d-b77f30d600b4-kube-api-access-tmh4v\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.537405 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-config-data\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.537563 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-scripts\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.547234 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639570 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-scripts\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639625 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-scripts\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639693 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-config-data\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639742 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6fbl\" (UniqueName: \"kubernetes.io/projected/6131c20a-aa01-4135-a19e-840a5cd9c5d8-kube-api-access-b6fbl\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639777 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-run-httpd\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639792 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639813 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmh4v\" (UniqueName: \"kubernetes.io/projected/2671f702-a121-43e8-be5d-b77f30d600b4-kube-api-access-tmh4v\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639833 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-log-httpd\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639891 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-config-data\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.639914 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.643385 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-scripts\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.644032 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.644665 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-config-data\") pod \"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.656759 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmh4v\" (UniqueName: \"kubernetes.io/projected/2671f702-a121-43e8-be5d-b77f30d600b4-kube-api-access-tmh4v\") pod 
\"nova-cell1-cell-mapping-z97pq\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.703560 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.741505 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-config-data\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.741589 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6fbl\" (UniqueName: \"kubernetes.io/projected/6131c20a-aa01-4135-a19e-840a5cd9c5d8-kube-api-access-b6fbl\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.741646 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-run-httpd\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.741666 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.741703 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-log-httpd\") pod 
\"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.741784 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.741825 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-scripts\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.743386 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-run-httpd\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.743841 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-log-httpd\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.747216 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.748491 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-scripts\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.748522 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-config-data\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.748990 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.761657 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6fbl\" (UniqueName: \"kubernetes.io/projected/6131c20a-aa01-4135-a19e-840a5cd9c5d8-kube-api-access-b6fbl\") pod \"ceilometer-0\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " pod="openstack/ceilometer-0" Dec 03 18:02:09 crc kubenswrapper[4687]: I1203 18:02:09.835761 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:02:10 crc kubenswrapper[4687]: W1203 18:02:10.166314 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2671f702_a121_43e8_be5d_b77f30d600b4.slice/crio-95999372b2c7437e58ac29fa2473852bf98b66d6a8b00800cd3365d3d2df6c95 WatchSource:0}: Error finding container 95999372b2c7437e58ac29fa2473852bf98b66d6a8b00800cd3365d3d2df6c95: Status 404 returned error can't find the container with id 95999372b2c7437e58ac29fa2473852bf98b66d6a8b00800cd3365d3d2df6c95 Dec 03 18:02:10 crc kubenswrapper[4687]: I1203 18:02:10.167958 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z97pq"] Dec 03 18:02:10 crc kubenswrapper[4687]: I1203 18:02:10.184747 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b93569ee-e954-4736-8b09-b92b48690d98","Type":"ContainerStarted","Data":"bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec"} Dec 03 18:02:10 crc kubenswrapper[4687]: I1203 18:02:10.184820 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b93569ee-e954-4736-8b09-b92b48690d98","Type":"ContainerStarted","Data":"3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84"} Dec 03 18:02:10 crc kubenswrapper[4687]: I1203 18:02:10.184836 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b93569ee-e954-4736-8b09-b92b48690d98","Type":"ContainerStarted","Data":"c09072f00919d6f069099f0e5b4c7b0921eca45155813f7d031c6d4750f3a8f1"} Dec 03 18:02:10 crc kubenswrapper[4687]: I1203 18:02:10.218786 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.218761315 podStartE2EDuration="2.218761315s" podCreationTimestamp="2025-12-03 18:02:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:02:10.204484348 +0000 UTC m=+1363.095179791" watchObservedRunningTime="2025-12-03 18:02:10.218761315 +0000 UTC m=+1363.109456748" Dec 03 18:02:10 crc kubenswrapper[4687]: I1203 18:02:10.350310 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:10 crc kubenswrapper[4687]: W1203 18:02:10.362595 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6131c20a_aa01_4135_a19e_840a5cd9c5d8.slice/crio-e2fa482679213ac2cc23e74725fb6d70815be8b21271b8fec1990f110b1afb33 WatchSource:0}: Error finding container e2fa482679213ac2cc23e74725fb6d70815be8b21271b8fec1990f110b1afb33: Status 404 returned error can't find the container with id e2fa482679213ac2cc23e74725fb6d70815be8b21271b8fec1990f110b1afb33 Dec 03 18:02:11 crc kubenswrapper[4687]: I1203 18:02:11.195533 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerStarted","Data":"8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73"} Dec 03 18:02:11 crc kubenswrapper[4687]: I1203 18:02:11.195852 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerStarted","Data":"e2fa482679213ac2cc23e74725fb6d70815be8b21271b8fec1990f110b1afb33"} Dec 03 18:02:11 crc kubenswrapper[4687]: I1203 18:02:11.198962 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z97pq" event={"ID":"2671f702-a121-43e8-be5d-b77f30d600b4","Type":"ContainerStarted","Data":"6f8e3ab246b109f9380e049915ef90065d9264c2059d1e2f8c361ad7cd97211a"} Dec 03 18:02:11 crc kubenswrapper[4687]: I1203 18:02:11.199051 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z97pq" 
event={"ID":"2671f702-a121-43e8-be5d-b77f30d600b4","Type":"ContainerStarted","Data":"95999372b2c7437e58ac29fa2473852bf98b66d6a8b00800cd3365d3d2df6c95"} Dec 03 18:02:11 crc kubenswrapper[4687]: I1203 18:02:11.222933 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-z97pq" podStartSLOduration=2.222916222 podStartE2EDuration="2.222916222s" podCreationTimestamp="2025-12-03 18:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:02:11.216356514 +0000 UTC m=+1364.107051947" watchObservedRunningTime="2025-12-03 18:02:11.222916222 +0000 UTC m=+1364.113611655" Dec 03 18:02:11 crc kubenswrapper[4687]: I1203 18:02:11.419465 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebcc64a8-d12b-4430-97d4-a61051fc6306" path="/var/lib/kubelet/pods/ebcc64a8-d12b-4430-97d4-a61051fc6306/volumes" Dec 03 18:02:11 crc kubenswrapper[4687]: I1203 18:02:11.640257 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:02:11 crc kubenswrapper[4687]: I1203 18:02:11.710287 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-nmjc2"] Dec 03 18:02:11 crc kubenswrapper[4687]: I1203 18:02:11.710622 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" podUID="2126988c-e607-43c2-b47a-c9935c88fa0b" containerName="dnsmasq-dns" containerID="cri-o://5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500" gracePeriod=10 Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.212195 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerStarted","Data":"3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b"} Dec 03 18:02:12 crc 
kubenswrapper[4687]: I1203 18:02:12.213608 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.215848 4687 generic.go:334] "Generic (PLEG): container finished" podID="2126988c-e607-43c2-b47a-c9935c88fa0b" containerID="5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500" exitCode=0 Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.217066 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" event={"ID":"2126988c-e607-43c2-b47a-c9935c88fa0b","Type":"ContainerDied","Data":"5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500"} Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.217210 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" event={"ID":"2126988c-e607-43c2-b47a-c9935c88fa0b","Type":"ContainerDied","Data":"527c0467988ac6399394b80c87e339f9d122b446be4b3a0f18da352608fb47ff"} Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.217306 4687 scope.go:117] "RemoveContainer" containerID="5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.248872 4687 scope.go:117] "RemoveContainer" containerID="2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.279721 4687 scope.go:117] "RemoveContainer" containerID="5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500" Dec 03 18:02:12 crc kubenswrapper[4687]: E1203 18:02:12.280485 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500\": container with ID starting with 5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500 not found: ID does not exist" 
containerID="5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.280523 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500"} err="failed to get container status \"5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500\": rpc error: code = NotFound desc = could not find container \"5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500\": container with ID starting with 5437d96528f6288695569f95eea552793fef172edcaf601341447e97fa7eb500 not found: ID does not exist" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.280550 4687 scope.go:117] "RemoveContainer" containerID="2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3" Dec 03 18:02:12 crc kubenswrapper[4687]: E1203 18:02:12.280877 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3\": container with ID starting with 2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3 not found: ID does not exist" containerID="2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.280908 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3"} err="failed to get container status \"2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3\": rpc error: code = NotFound desc = could not find container \"2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3\": container with ID starting with 2ebf69e0ee3ea3b02d0eb35cbd440cf1fe2496c432af845d37c3d9e3c270bbd3 not found: ID does not exist" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.306179 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-config\") pod \"2126988c-e607-43c2-b47a-c9935c88fa0b\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.306341 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-sb\") pod \"2126988c-e607-43c2-b47a-c9935c88fa0b\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.306412 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8hzr\" (UniqueName: \"kubernetes.io/projected/2126988c-e607-43c2-b47a-c9935c88fa0b-kube-api-access-c8hzr\") pod \"2126988c-e607-43c2-b47a-c9935c88fa0b\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.306507 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-nb\") pod \"2126988c-e607-43c2-b47a-c9935c88fa0b\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.306569 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-swift-storage-0\") pod \"2126988c-e607-43c2-b47a-c9935c88fa0b\" (UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.306606 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-svc\") pod \"2126988c-e607-43c2-b47a-c9935c88fa0b\" 
(UID: \"2126988c-e607-43c2-b47a-c9935c88fa0b\") " Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.316364 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2126988c-e607-43c2-b47a-c9935c88fa0b-kube-api-access-c8hzr" (OuterVolumeSpecName: "kube-api-access-c8hzr") pod "2126988c-e607-43c2-b47a-c9935c88fa0b" (UID: "2126988c-e607-43c2-b47a-c9935c88fa0b"). InnerVolumeSpecName "kube-api-access-c8hzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.359096 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-config" (OuterVolumeSpecName: "config") pod "2126988c-e607-43c2-b47a-c9935c88fa0b" (UID: "2126988c-e607-43c2-b47a-c9935c88fa0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.366708 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2126988c-e607-43c2-b47a-c9935c88fa0b" (UID: "2126988c-e607-43c2-b47a-c9935c88fa0b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.378321 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2126988c-e607-43c2-b47a-c9935c88fa0b" (UID: "2126988c-e607-43c2-b47a-c9935c88fa0b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.387954 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2126988c-e607-43c2-b47a-c9935c88fa0b" (UID: "2126988c-e607-43c2-b47a-c9935c88fa0b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.398670 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2126988c-e607-43c2-b47a-c9935c88fa0b" (UID: "2126988c-e607-43c2-b47a-c9935c88fa0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.408927 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.408967 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8hzr\" (UniqueName: \"kubernetes.io/projected/2126988c-e607-43c2-b47a-c9935c88fa0b-kube-api-access-c8hzr\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.408985 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.408998 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-swift-storage-0\") on node \"crc\" 
DevicePath \"\"" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.409011 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:12 crc kubenswrapper[4687]: I1203 18:02:12.409022 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2126988c-e607-43c2-b47a-c9935c88fa0b-config\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:13 crc kubenswrapper[4687]: I1203 18:02:13.236620 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerStarted","Data":"99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55"} Dec 03 18:02:13 crc kubenswrapper[4687]: I1203 18:02:13.240008 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" Dec 03 18:02:13 crc kubenswrapper[4687]: I1203 18:02:13.298944 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-nmjc2"] Dec 03 18:02:13 crc kubenswrapper[4687]: I1203 18:02:13.310366 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-nmjc2"] Dec 03 18:02:13 crc kubenswrapper[4687]: I1203 18:02:13.454740 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2126988c-e607-43c2-b47a-c9935c88fa0b" path="/var/lib/kubelet/pods/2126988c-e607-43c2-b47a-c9935c88fa0b/volumes" Dec 03 18:02:14 crc kubenswrapper[4687]: I1203 18:02:14.251312 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerStarted","Data":"5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29"} Dec 03 18:02:14 crc kubenswrapper[4687]: I1203 18:02:14.251970 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 18:02:14 crc kubenswrapper[4687]: I1203 18:02:14.287859 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9392844230000001 podStartE2EDuration="5.287840922s" podCreationTimestamp="2025-12-03 18:02:09 +0000 UTC" firstStartedPulling="2025-12-03 18:02:10.365109977 +0000 UTC m=+1363.255805410" lastFinishedPulling="2025-12-03 18:02:13.713666476 +0000 UTC m=+1366.604361909" observedRunningTime="2025-12-03 18:02:14.282560608 +0000 UTC m=+1367.173256041" watchObservedRunningTime="2025-12-03 18:02:14.287840922 +0000 UTC m=+1367.178536355" Dec 03 18:02:15 crc kubenswrapper[4687]: I1203 18:02:15.271445 4687 generic.go:334] "Generic (PLEG): container finished" podID="2671f702-a121-43e8-be5d-b77f30d600b4" containerID="6f8e3ab246b109f9380e049915ef90065d9264c2059d1e2f8c361ad7cd97211a" exitCode=0 Dec 03 18:02:15 crc kubenswrapper[4687]: I1203 18:02:15.271990 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z97pq" event={"ID":"2671f702-a121-43e8-be5d-b77f30d600b4","Type":"ContainerDied","Data":"6f8e3ab246b109f9380e049915ef90065d9264c2059d1e2f8c361ad7cd97211a"} Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.670278 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.796157 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-scripts\") pod \"2671f702-a121-43e8-be5d-b77f30d600b4\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.796364 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-combined-ca-bundle\") pod \"2671f702-a121-43e8-be5d-b77f30d600b4\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.797471 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-config-data\") pod \"2671f702-a121-43e8-be5d-b77f30d600b4\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.797544 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmh4v\" (UniqueName: \"kubernetes.io/projected/2671f702-a121-43e8-be5d-b77f30d600b4-kube-api-access-tmh4v\") pod \"2671f702-a121-43e8-be5d-b77f30d600b4\" (UID: \"2671f702-a121-43e8-be5d-b77f30d600b4\") " Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.803139 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-scripts" (OuterVolumeSpecName: "scripts") pod "2671f702-a121-43e8-be5d-b77f30d600b4" (UID: "2671f702-a121-43e8-be5d-b77f30d600b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.812116 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2671f702-a121-43e8-be5d-b77f30d600b4-kube-api-access-tmh4v" (OuterVolumeSpecName: "kube-api-access-tmh4v") pod "2671f702-a121-43e8-be5d-b77f30d600b4" (UID: "2671f702-a121-43e8-be5d-b77f30d600b4"). InnerVolumeSpecName "kube-api-access-tmh4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.840907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2671f702-a121-43e8-be5d-b77f30d600b4" (UID: "2671f702-a121-43e8-be5d-b77f30d600b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.842296 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-config-data" (OuterVolumeSpecName: "config-data") pod "2671f702-a121-43e8-be5d-b77f30d600b4" (UID: "2671f702-a121-43e8-be5d-b77f30d600b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.900925 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.902805 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.903405 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2671f702-a121-43e8-be5d-b77f30d600b4-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:16 crc kubenswrapper[4687]: I1203 18:02:16.903547 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmh4v\" (UniqueName: \"kubernetes.io/projected/2671f702-a121-43e8-be5d-b77f30d600b4-kube-api-access-tmh4v\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.046601 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-nmjc2" podUID="2126988c-e607-43c2-b47a-c9935c88fa0b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.190:5353: i/o timeout" Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.297909 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z97pq" event={"ID":"2671f702-a121-43e8-be5d-b77f30d600b4","Type":"ContainerDied","Data":"95999372b2c7437e58ac29fa2473852bf98b66d6a8b00800cd3365d3d2df6c95"} Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.297959 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95999372b2c7437e58ac29fa2473852bf98b66d6a8b00800cd3365d3d2df6c95" Dec 03 18:02:17 crc kubenswrapper[4687]: 
I1203 18:02:17.298023 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z97pq" Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.525648 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.525933 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b93569ee-e954-4736-8b09-b92b48690d98" containerName="nova-api-log" containerID="cri-o://3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84" gracePeriod=30 Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.526089 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b93569ee-e954-4736-8b09-b92b48690d98" containerName="nova-api-api" containerID="cri-o://bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec" gracePeriod=30 Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.541746 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.542022 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="bc519674-30e5-4d39-a64a-8f483b144211" containerName="nova-scheduler-scheduler" containerID="cri-o://48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2" gracePeriod=30 Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.562284 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.562689 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-log" containerID="cri-o://a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017" gracePeriod=30 Dec 
03 18:02:17 crc kubenswrapper[4687]: I1203 18:02:17.563083 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-metadata" containerID="cri-o://cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278" gracePeriod=30 Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.257279 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.309248 4687 generic.go:334] "Generic (PLEG): container finished" podID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerID="a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017" exitCode=143 Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.309357 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"00bec5d1-2b13-41e8-8204-d0aff2afc9d2","Type":"ContainerDied","Data":"a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017"} Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.311992 4687 generic.go:334] "Generic (PLEG): container finished" podID="b93569ee-e954-4736-8b09-b92b48690d98" containerID="bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec" exitCode=0 Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.312020 4687 generic.go:334] "Generic (PLEG): container finished" podID="b93569ee-e954-4736-8b09-b92b48690d98" containerID="3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84" exitCode=143 Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.312043 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b93569ee-e954-4736-8b09-b92b48690d98","Type":"ContainerDied","Data":"bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec"} Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.312054 4687 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.312069 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b93569ee-e954-4736-8b09-b92b48690d98","Type":"ContainerDied","Data":"3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84"} Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.312080 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b93569ee-e954-4736-8b09-b92b48690d98","Type":"ContainerDied","Data":"c09072f00919d6f069099f0e5b4c7b0921eca45155813f7d031c6d4750f3a8f1"} Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.312097 4687 scope.go:117] "RemoveContainer" containerID="bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.333702 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b93569ee-e954-4736-8b09-b92b48690d98-logs\") pod \"b93569ee-e954-4736-8b09-b92b48690d98\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.333751 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-config-data\") pod \"b93569ee-e954-4736-8b09-b92b48690d98\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.333834 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-combined-ca-bundle\") pod \"b93569ee-e954-4736-8b09-b92b48690d98\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.333860 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-public-tls-certs\") pod \"b93569ee-e954-4736-8b09-b92b48690d98\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.333925 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m6j4\" (UniqueName: \"kubernetes.io/projected/b93569ee-e954-4736-8b09-b92b48690d98-kube-api-access-4m6j4\") pod \"b93569ee-e954-4736-8b09-b92b48690d98\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.333970 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-internal-tls-certs\") pod \"b93569ee-e954-4736-8b09-b92b48690d98\" (UID: \"b93569ee-e954-4736-8b09-b92b48690d98\") " Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.334211 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93569ee-e954-4736-8b09-b92b48690d98-logs" (OuterVolumeSpecName: "logs") pod "b93569ee-e954-4736-8b09-b92b48690d98" (UID: "b93569ee-e954-4736-8b09-b92b48690d98"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.334447 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b93569ee-e954-4736-8b09-b92b48690d98-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.340137 4687 scope.go:117] "RemoveContainer" containerID="3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.342829 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93569ee-e954-4736-8b09-b92b48690d98-kube-api-access-4m6j4" (OuterVolumeSpecName: "kube-api-access-4m6j4") pod "b93569ee-e954-4736-8b09-b92b48690d98" (UID: "b93569ee-e954-4736-8b09-b92b48690d98"). InnerVolumeSpecName "kube-api-access-4m6j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.367444 4687 scope.go:117] "RemoveContainer" containerID="bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec" Dec 03 18:02:18 crc kubenswrapper[4687]: E1203 18:02:18.369646 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec\": container with ID starting with bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec not found: ID does not exist" containerID="bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.369685 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec"} err="failed to get container status \"bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec\": rpc error: code = NotFound desc = could not find container 
\"bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec\": container with ID starting with bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec not found: ID does not exist" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.369705 4687 scope.go:117] "RemoveContainer" containerID="3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.370807 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-config-data" (OuterVolumeSpecName: "config-data") pod "b93569ee-e954-4736-8b09-b92b48690d98" (UID: "b93569ee-e954-4736-8b09-b92b48690d98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:18 crc kubenswrapper[4687]: E1203 18:02:18.375226 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84\": container with ID starting with 3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84 not found: ID does not exist" containerID="3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.375272 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84"} err="failed to get container status \"3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84\": rpc error: code = NotFound desc = could not find container \"3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84\": container with ID starting with 3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84 not found: ID does not exist" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.375288 4687 scope.go:117] "RemoveContainer" 
containerID="bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.376026 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec"} err="failed to get container status \"bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec\": rpc error: code = NotFound desc = could not find container \"bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec\": container with ID starting with bf3495d946bbd96a38f924c201e9cf6ec5737bfeb2f6aa8722e3f2c8817d77ec not found: ID does not exist" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.376086 4687 scope.go:117] "RemoveContainer" containerID="3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.377563 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84"} err="failed to get container status \"3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84\": rpc error: code = NotFound desc = could not find container \"3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84\": container with ID starting with 3fdc48757aac1ac9265b1b851a37eeb1b652320fc995c97d3d57c18f45115e84 not found: ID does not exist" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.379702 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b93569ee-e954-4736-8b09-b92b48690d98" (UID: "b93569ee-e954-4736-8b09-b92b48690d98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.398210 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b93569ee-e954-4736-8b09-b92b48690d98" (UID: "b93569ee-e954-4736-8b09-b92b48690d98"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.400040 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b93569ee-e954-4736-8b09-b92b48690d98" (UID: "b93569ee-e954-4736-8b09-b92b48690d98"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.435971 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m6j4\" (UniqueName: \"kubernetes.io/projected/b93569ee-e954-4736-8b09-b92b48690d98-kube-api-access-4m6j4\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.436005 4687 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.436014 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.436023 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.436031 4687 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b93569ee-e954-4736-8b09-b92b48690d98-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.647188 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.655809 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.681324 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:18 crc kubenswrapper[4687]: E1203 18:02:18.687597 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2671f702-a121-43e8-be5d-b77f30d600b4" containerName="nova-manage" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.687741 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2671f702-a121-43e8-be5d-b77f30d600b4" containerName="nova-manage" Dec 03 18:02:18 crc kubenswrapper[4687]: E1203 18:02:18.687847 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93569ee-e954-4736-8b09-b92b48690d98" containerName="nova-api-api" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.687928 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93569ee-e954-4736-8b09-b92b48690d98" containerName="nova-api-api" Dec 03 18:02:18 crc kubenswrapper[4687]: E1203 18:02:18.688012 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2126988c-e607-43c2-b47a-c9935c88fa0b" containerName="init" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.688083 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2126988c-e607-43c2-b47a-c9935c88fa0b" containerName="init" Dec 03 18:02:18 crc kubenswrapper[4687]: E1203 18:02:18.688199 4687 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b93569ee-e954-4736-8b09-b92b48690d98" containerName="nova-api-log" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.688283 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93569ee-e954-4736-8b09-b92b48690d98" containerName="nova-api-log" Dec 03 18:02:18 crc kubenswrapper[4687]: E1203 18:02:18.688362 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2126988c-e607-43c2-b47a-c9935c88fa0b" containerName="dnsmasq-dns" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.688431 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2126988c-e607-43c2-b47a-c9935c88fa0b" containerName="dnsmasq-dns" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.688751 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2126988c-e607-43c2-b47a-c9935c88fa0b" containerName="dnsmasq-dns" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.689029 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2671f702-a121-43e8-be5d-b77f30d600b4" containerName="nova-manage" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.689146 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93569ee-e954-4736-8b09-b92b48690d98" containerName="nova-api-api" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.689235 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93569ee-e954-4736-8b09-b92b48690d98" containerName="nova-api-log" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.690419 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.696753 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.703683 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.708499 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.708404 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.842673 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-public-tls-certs\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.842726 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-config-data\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.842939 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b25z6\" (UniqueName: \"kubernetes.io/projected/63033eea-9708-468e-b1e6-87e6882a5c75-kube-api-access-b25z6\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.843014 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.843248 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63033eea-9708-468e-b1e6-87e6882a5c75-logs\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.843337 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-internal-tls-certs\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.944989 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-public-tls-certs\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.945042 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-config-data\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.945099 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b25z6\" (UniqueName: \"kubernetes.io/projected/63033eea-9708-468e-b1e6-87e6882a5c75-kube-api-access-b25z6\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " 
pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.945150 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.945219 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63033eea-9708-468e-b1e6-87e6882a5c75-logs\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.945249 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-internal-tls-certs\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.946173 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63033eea-9708-468e-b1e6-87e6882a5c75-logs\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.949984 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-internal-tls-certs\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.950096 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.950456 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-config-data\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.956447 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/63033eea-9708-468e-b1e6-87e6882a5c75-public-tls-certs\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:18 crc kubenswrapper[4687]: I1203 18:02:18.962375 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b25z6\" (UniqueName: \"kubernetes.io/projected/63033eea-9708-468e-b1e6-87e6882a5c75-kube-api-access-b25z6\") pod \"nova-api-0\" (UID: \"63033eea-9708-468e-b1e6-87e6882a5c75\") " pod="openstack/nova-api-0" Dec 03 18:02:19 crc kubenswrapper[4687]: I1203 18:02:19.041090 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 03 18:02:19 crc kubenswrapper[4687]: I1203 18:02:19.419596 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93569ee-e954-4736-8b09-b92b48690d98" path="/var/lib/kubelet/pods/b93569ee-e954-4736-8b09-b92b48690d98/volumes" Dec 03 18:02:19 crc kubenswrapper[4687]: I1203 18:02:19.504799 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 03 18:02:19 crc kubenswrapper[4687]: W1203 18:02:19.509734 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63033eea_9708_468e_b1e6_87e6882a5c75.slice/crio-086d2e9d126431a9166ff3334910e5267357c37f27234455700c6db65636a240 WatchSource:0}: Error finding container 086d2e9d126431a9166ff3334910e5267357c37f27234455700c6db65636a240: Status 404 returned error can't find the container with id 086d2e9d126431a9166ff3334910e5267357c37f27234455700c6db65636a240 Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.312366 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.335369 4687 generic.go:334] "Generic (PLEG): container finished" podID="bc519674-30e5-4d39-a64a-8f483b144211" containerID="48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2" exitCode=0 Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.335437 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc519674-30e5-4d39-a64a-8f483b144211","Type":"ContainerDied","Data":"48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2"} Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.335452 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.335488 4687 scope.go:117] "RemoveContainer" containerID="48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.335475 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bc519674-30e5-4d39-a64a-8f483b144211","Type":"ContainerDied","Data":"645543121ccc2f487a1617e4a705fbe2c9e0149ee668b981f161e1cd3da2c518"} Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.340579 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63033eea-9708-468e-b1e6-87e6882a5c75","Type":"ContainerStarted","Data":"8063ded012465066b35d8fdf87c92adc7b46d0bed9dc9a79a47db2242100da4a"} Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.340618 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63033eea-9708-468e-b1e6-87e6882a5c75","Type":"ContainerStarted","Data":"164ab4f8d42dc4ebc06a0676ce1b75df7875586b9f72b08af141d0685b303cbb"} Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.340628 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"63033eea-9708-468e-b1e6-87e6882a5c75","Type":"ContainerStarted","Data":"086d2e9d126431a9166ff3334910e5267357c37f27234455700c6db65636a240"} Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.359566 4687 scope.go:117] "RemoveContainer" containerID="48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2" Dec 03 18:02:20 crc kubenswrapper[4687]: E1203 18:02:20.369559 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2\": container with ID starting with 48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2 not found: ID does not exist" 
containerID="48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.369631 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2"} err="failed to get container status \"48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2\": rpc error: code = NotFound desc = could not find container \"48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2\": container with ID starting with 48179c530e138221b99f1f6dd84f81d34fcb4fb0ca1ea321dc8f95c6c71b86f2 not found: ID does not exist" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.471713 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srckx\" (UniqueName: \"kubernetes.io/projected/bc519674-30e5-4d39-a64a-8f483b144211-kube-api-access-srckx\") pod \"bc519674-30e5-4d39-a64a-8f483b144211\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.472405 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-config-data\") pod \"bc519674-30e5-4d39-a64a-8f483b144211\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.472969 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-combined-ca-bundle\") pod \"bc519674-30e5-4d39-a64a-8f483b144211\" (UID: \"bc519674-30e5-4d39-a64a-8f483b144211\") " Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.477490 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc519674-30e5-4d39-a64a-8f483b144211-kube-api-access-srckx" (OuterVolumeSpecName: 
"kube-api-access-srckx") pod "bc519674-30e5-4d39-a64a-8f483b144211" (UID: "bc519674-30e5-4d39-a64a-8f483b144211"). InnerVolumeSpecName "kube-api-access-srckx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.503256 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-config-data" (OuterVolumeSpecName: "config-data") pod "bc519674-30e5-4d39-a64a-8f483b144211" (UID: "bc519674-30e5-4d39-a64a-8f483b144211"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.504376 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc519674-30e5-4d39-a64a-8f483b144211" (UID: "bc519674-30e5-4d39-a64a-8f483b144211"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.575620 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srckx\" (UniqueName: \"kubernetes.io/projected/bc519674-30e5-4d39-a64a-8f483b144211-kube-api-access-srckx\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.575899 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.575995 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc519674-30e5-4d39-a64a-8f483b144211-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.692100 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:49054->10.217.0.195:8775: read: connection reset by peer" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.692224 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:49048->10.217.0.195:8775: read: connection reset by peer" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.781882 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.781856261 podStartE2EDuration="2.781856261s" podCreationTimestamp="2025-12-03 18:02:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-03 18:02:20.371026799 +0000 UTC m=+1373.261722232" watchObservedRunningTime="2025-12-03 18:02:20.781856261 +0000 UTC m=+1373.672551714" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.790554 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.802849 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.829916 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:02:20 crc kubenswrapper[4687]: E1203 18:02:20.830481 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc519674-30e5-4d39-a64a-8f483b144211" containerName="nova-scheduler-scheduler" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.830510 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc519674-30e5-4d39-a64a-8f483b144211" containerName="nova-scheduler-scheduler" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.830774 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc519674-30e5-4d39-a64a-8f483b144211" containerName="nova-scheduler-scheduler" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.831573 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.834533 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.847870 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.992174 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfl7\" (UniqueName: \"kubernetes.io/projected/3be8282f-510f-4d0d-a98f-8aab605e3805-kube-api-access-5dfl7\") pod \"nova-scheduler-0\" (UID: \"3be8282f-510f-4d0d-a98f-8aab605e3805\") " pod="openstack/nova-scheduler-0" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.992247 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be8282f-510f-4d0d-a98f-8aab605e3805-config-data\") pod \"nova-scheduler-0\" (UID: \"3be8282f-510f-4d0d-a98f-8aab605e3805\") " pod="openstack/nova-scheduler-0" Dec 03 18:02:20 crc kubenswrapper[4687]: I1203 18:02:20.992391 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be8282f-510f-4d0d-a98f-8aab605e3805-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3be8282f-510f-4d0d-a98f-8aab605e3805\") " pod="openstack/nova-scheduler-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.096054 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be8282f-510f-4d0d-a98f-8aab605e3805-config-data\") pod \"nova-scheduler-0\" (UID: \"3be8282f-510f-4d0d-a98f-8aab605e3805\") " pod="openstack/nova-scheduler-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.096179 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be8282f-510f-4d0d-a98f-8aab605e3805-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3be8282f-510f-4d0d-a98f-8aab605e3805\") " pod="openstack/nova-scheduler-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.096261 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dfl7\" (UniqueName: \"kubernetes.io/projected/3be8282f-510f-4d0d-a98f-8aab605e3805-kube-api-access-5dfl7\") pod \"nova-scheduler-0\" (UID: \"3be8282f-510f-4d0d-a98f-8aab605e3805\") " pod="openstack/nova-scheduler-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.106231 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3be8282f-510f-4d0d-a98f-8aab605e3805-config-data\") pod \"nova-scheduler-0\" (UID: \"3be8282f-510f-4d0d-a98f-8aab605e3805\") " pod="openstack/nova-scheduler-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.116785 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3be8282f-510f-4d0d-a98f-8aab605e3805-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3be8282f-510f-4d0d-a98f-8aab605e3805\") " pod="openstack/nova-scheduler-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.121679 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dfl7\" (UniqueName: \"kubernetes.io/projected/3be8282f-510f-4d0d-a98f-8aab605e3805-kube-api-access-5dfl7\") pod \"nova-scheduler-0\" (UID: \"3be8282f-510f-4d0d-a98f-8aab605e3805\") " pod="openstack/nova-scheduler-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.165264 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.285500 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.358997 4687 generic.go:334] "Generic (PLEG): container finished" podID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerID="cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278" exitCode=0 Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.359069 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"00bec5d1-2b13-41e8-8204-d0aff2afc9d2","Type":"ContainerDied","Data":"cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278"} Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.359110 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"00bec5d1-2b13-41e8-8204-d0aff2afc9d2","Type":"ContainerDied","Data":"88386ad4a54b803853bdf840a0451d67080b804c0f89bc3279de2f29bd0e5ef4"} Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.359153 4687 scope.go:117] "RemoveContainer" containerID="cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.359303 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.391226 4687 scope.go:117] "RemoveContainer" containerID="a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.402880 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-logs\") pod \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.403054 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-nova-metadata-tls-certs\") pod \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.403086 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-combined-ca-bundle\") pod \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.403138 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-config-data\") pod \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\" (UID: \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.403242 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86qk5\" (UniqueName: \"kubernetes.io/projected/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-kube-api-access-86qk5\") pod \"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\" (UID: 
\"00bec5d1-2b13-41e8-8204-d0aff2afc9d2\") " Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.405030 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-logs" (OuterVolumeSpecName: "logs") pod "00bec5d1-2b13-41e8-8204-d0aff2afc9d2" (UID: "00bec5d1-2b13-41e8-8204-d0aff2afc9d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.409507 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-kube-api-access-86qk5" (OuterVolumeSpecName: "kube-api-access-86qk5") pod "00bec5d1-2b13-41e8-8204-d0aff2afc9d2" (UID: "00bec5d1-2b13-41e8-8204-d0aff2afc9d2"). InnerVolumeSpecName "kube-api-access-86qk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.413432 4687 scope.go:117] "RemoveContainer" containerID="cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278" Dec 03 18:02:21 crc kubenswrapper[4687]: E1203 18:02:21.413973 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278\": container with ID starting with cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278 not found: ID does not exist" containerID="cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.414004 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278"} err="failed to get container status \"cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278\": rpc error: code = NotFound desc = could not find container 
\"cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278\": container with ID starting with cd268d7da28dc3a77271bdfeff025f7eada7ebfae56a6a5df27352683e1b9278 not found: ID does not exist" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.414024 4687 scope.go:117] "RemoveContainer" containerID="a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017" Dec 03 18:02:21 crc kubenswrapper[4687]: E1203 18:02:21.423851 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017\": container with ID starting with a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017 not found: ID does not exist" containerID="a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.423890 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017"} err="failed to get container status \"a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017\": rpc error: code = NotFound desc = could not find container \"a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017\": container with ID starting with a4188b430edc5b8315af65e9923065ea62919e7ca1e9705bbf3df2387d7f8017 not found: ID does not exist" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.426984 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc519674-30e5-4d39-a64a-8f483b144211" path="/var/lib/kubelet/pods/bc519674-30e5-4d39-a64a-8f483b144211/volumes" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.433292 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-config-data" (OuterVolumeSpecName: "config-data") pod "00bec5d1-2b13-41e8-8204-d0aff2afc9d2" (UID: 
"00bec5d1-2b13-41e8-8204-d0aff2afc9d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.440514 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00bec5d1-2b13-41e8-8204-d0aff2afc9d2" (UID: "00bec5d1-2b13-41e8-8204-d0aff2afc9d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.468302 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "00bec5d1-2b13-41e8-8204-d0aff2afc9d2" (UID: "00bec5d1-2b13-41e8-8204-d0aff2afc9d2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.505212 4687 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-logs\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.505241 4687 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.505252 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.505261 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.505272 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86qk5\" (UniqueName: \"kubernetes.io/projected/00bec5d1-2b13-41e8-8204-d0aff2afc9d2-kube-api-access-86qk5\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.666009 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.848487 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.862568 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.875521 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:02:21 crc kubenswrapper[4687]: E1203 18:02:21.875948 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-log" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.875968 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-log" Dec 03 18:02:21 crc kubenswrapper[4687]: E1203 18:02:21.875994 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-metadata" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.876001 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-metadata" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.876188 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" 
containerName="nova-metadata-metadata" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.876210 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" containerName="nova-metadata-log" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.877425 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.893948 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.894696 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 03 18:02:21 crc kubenswrapper[4687]: I1203 18:02:21.900076 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.015509 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ff347c-1775-431c-bc91-ed5a80ee620e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.015621 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ff347c-1775-431c-bc91-ed5a80ee620e-logs\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.015650 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w67hc\" (UniqueName: \"kubernetes.io/projected/c0ff347c-1775-431c-bc91-ed5a80ee620e-kube-api-access-w67hc\") pod \"nova-metadata-0\" (UID: 
\"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.015718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ff347c-1775-431c-bc91-ed5a80ee620e-config-data\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.015829 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ff347c-1775-431c-bc91-ed5a80ee620e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.117700 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ff347c-1775-431c-bc91-ed5a80ee620e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.117830 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ff347c-1775-431c-bc91-ed5a80ee620e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.117884 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ff347c-1775-431c-bc91-ed5a80ee620e-logs\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.117907 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w67hc\" (UniqueName: \"kubernetes.io/projected/c0ff347c-1775-431c-bc91-ed5a80ee620e-kube-api-access-w67hc\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.117969 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ff347c-1775-431c-bc91-ed5a80ee620e-config-data\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.118683 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0ff347c-1775-431c-bc91-ed5a80ee620e-logs\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.122605 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0ff347c-1775-431c-bc91-ed5a80ee620e-config-data\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.134104 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0ff347c-1775-431c-bc91-ed5a80ee620e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.135150 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0ff347c-1775-431c-bc91-ed5a80ee620e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.135279 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w67hc\" (UniqueName: \"kubernetes.io/projected/c0ff347c-1775-431c-bc91-ed5a80ee620e-kube-api-access-w67hc\") pod \"nova-metadata-0\" (UID: \"c0ff347c-1775-431c-bc91-ed5a80ee620e\") " pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.205383 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.396338 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3be8282f-510f-4d0d-a98f-8aab605e3805","Type":"ContainerStarted","Data":"3e91a225f2a725fab2c2904841baf42ae55e3b563f58c2604a0eb2368b5cb4d2"} Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.396811 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3be8282f-510f-4d0d-a98f-8aab605e3805","Type":"ContainerStarted","Data":"623fa45e6afd6b580d5c5a8366850135fb3f95dd8122456cefcdcf1d15504883"} Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.430699 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.430675622 podStartE2EDuration="2.430675622s" podCreationTimestamp="2025-12-03 18:02:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:02:22.413846276 +0000 UTC m=+1375.304541709" watchObservedRunningTime="2025-12-03 18:02:22.430675622 +0000 UTC m=+1375.321371065" Dec 03 18:02:22 crc kubenswrapper[4687]: I1203 18:02:22.719003 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 03 18:02:23 crc kubenswrapper[4687]: I1203 18:02:23.427551 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00bec5d1-2b13-41e8-8204-d0aff2afc9d2" path="/var/lib/kubelet/pods/00bec5d1-2b13-41e8-8204-d0aff2afc9d2/volumes" Dec 03 18:02:23 crc kubenswrapper[4687]: I1203 18:02:23.429194 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c0ff347c-1775-431c-bc91-ed5a80ee620e","Type":"ContainerStarted","Data":"8109839f8623707fe59b39ade67728846c37ebba1e64d50a3b2252a50958c877"} Dec 03 18:02:23 crc kubenswrapper[4687]: I1203 18:02:23.429232 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c0ff347c-1775-431c-bc91-ed5a80ee620e","Type":"ContainerStarted","Data":"213fa5b7a9cc3432efb927972c58b1fd58bfeda207b228f44371869836ba6192"} Dec 03 18:02:23 crc kubenswrapper[4687]: I1203 18:02:23.429253 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c0ff347c-1775-431c-bc91-ed5a80ee620e","Type":"ContainerStarted","Data":"ce93b941dee00acfe66e0f7cae8a2a45ded9f7acb09a2df00322c52d79e72d6c"} Dec 03 18:02:23 crc kubenswrapper[4687]: I1203 18:02:23.461050 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.461031047 podStartE2EDuration="2.461031047s" podCreationTimestamp="2025-12-03 18:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:02:23.444512431 +0000 UTC m=+1376.335207874" watchObservedRunningTime="2025-12-03 18:02:23.461031047 +0000 UTC m=+1376.351726480" Dec 03 18:02:26 crc kubenswrapper[4687]: I1203 18:02:26.166082 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 03 18:02:27 crc kubenswrapper[4687]: I1203 18:02:27.206033 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 18:02:27 crc 
kubenswrapper[4687]: I1203 18:02:27.206199 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 03 18:02:29 crc kubenswrapper[4687]: I1203 18:02:29.041375 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 18:02:29 crc kubenswrapper[4687]: I1203 18:02:29.041680 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 03 18:02:30 crc kubenswrapper[4687]: I1203 18:02:30.054450 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63033eea-9708-468e-b1e6-87e6882a5c75" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 18:02:30 crc kubenswrapper[4687]: I1203 18:02:30.054493 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="63033eea-9708-468e-b1e6-87e6882a5c75" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 18:02:31 crc kubenswrapper[4687]: I1203 18:02:31.165762 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 03 18:02:31 crc kubenswrapper[4687]: I1203 18:02:31.199859 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 03 18:02:31 crc kubenswrapper[4687]: I1203 18:02:31.540870 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 03 18:02:32 crc kubenswrapper[4687]: I1203 18:02:32.206537 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 18:02:32 crc kubenswrapper[4687]: I1203 18:02:32.206580 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 03 18:02:33 crc kubenswrapper[4687]: I1203 18:02:33.218265 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c0ff347c-1775-431c-bc91-ed5a80ee620e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 18:02:33 crc kubenswrapper[4687]: I1203 18:02:33.218342 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c0ff347c-1775-431c-bc91-ed5a80ee620e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 18:02:39 crc kubenswrapper[4687]: I1203 18:02:39.047698 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 18:02:39 crc kubenswrapper[4687]: I1203 18:02:39.048516 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 18:02:39 crc kubenswrapper[4687]: I1203 18:02:39.050630 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 03 18:02:39 crc kubenswrapper[4687]: I1203 18:02:39.056893 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 18:02:39 crc kubenswrapper[4687]: I1203 18:02:39.621553 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 03 18:02:39 crc kubenswrapper[4687]: I1203 18:02:39.627754 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 03 18:02:39 crc kubenswrapper[4687]: I1203 18:02:39.934597 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 18:02:42 crc kubenswrapper[4687]: 
I1203 18:02:42.212043 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 18:02:42 crc kubenswrapper[4687]: I1203 18:02:42.212587 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 03 18:02:42 crc kubenswrapper[4687]: I1203 18:02:42.221536 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 18:02:42 crc kubenswrapper[4687]: I1203 18:02:42.221612 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 03 18:02:43 crc kubenswrapper[4687]: I1203 18:02:43.527063 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 18:02:43 crc kubenswrapper[4687]: I1203 18:02:43.527546 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c337091b-5995-4214-9161-18188eb806aa" containerName="kube-state-metrics" containerID="cri-o://d839dd50b9f29bcfc89ca4af10103c012516b56e1588681a56a8d2b9c30150a3" gracePeriod=30 Dec 03 18:02:43 crc kubenswrapper[4687]: I1203 18:02:43.657916 4687 generic.go:334] "Generic (PLEG): container finished" podID="c337091b-5995-4214-9161-18188eb806aa" containerID="d839dd50b9f29bcfc89ca4af10103c012516b56e1588681a56a8d2b9c30150a3" exitCode=2 Dec 03 18:02:43 crc kubenswrapper[4687]: I1203 18:02:43.658834 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c337091b-5995-4214-9161-18188eb806aa","Type":"ContainerDied","Data":"d839dd50b9f29bcfc89ca4af10103c012516b56e1588681a56a8d2b9c30150a3"} Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.135317 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.251389 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtf42\" (UniqueName: \"kubernetes.io/projected/c337091b-5995-4214-9161-18188eb806aa-kube-api-access-wtf42\") pod \"c337091b-5995-4214-9161-18188eb806aa\" (UID: \"c337091b-5995-4214-9161-18188eb806aa\") " Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.259895 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c337091b-5995-4214-9161-18188eb806aa-kube-api-access-wtf42" (OuterVolumeSpecName: "kube-api-access-wtf42") pod "c337091b-5995-4214-9161-18188eb806aa" (UID: "c337091b-5995-4214-9161-18188eb806aa"). InnerVolumeSpecName "kube-api-access-wtf42". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.353683 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtf42\" (UniqueName: \"kubernetes.io/projected/c337091b-5995-4214-9161-18188eb806aa-kube-api-access-wtf42\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.670399 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c337091b-5995-4214-9161-18188eb806aa","Type":"ContainerDied","Data":"cdfab4f6fe124ddd0854f023b10cf3ce89dc10a8c88a9dd87321220a40b1d24a"} Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.670444 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.670718 4687 scope.go:117] "RemoveContainer" containerID="d839dd50b9f29bcfc89ca4af10103c012516b56e1588681a56a8d2b9c30150a3" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.722744 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.746213 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.756501 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 18:02:44 crc kubenswrapper[4687]: E1203 18:02:44.757097 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c337091b-5995-4214-9161-18188eb806aa" containerName="kube-state-metrics" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.757140 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c337091b-5995-4214-9161-18188eb806aa" containerName="kube-state-metrics" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.757418 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c337091b-5995-4214-9161-18188eb806aa" containerName="kube-state-metrics" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.758755 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.761249 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.761287 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.765405 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.862204 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.862308 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2hgb\" (UniqueName: \"kubernetes.io/projected/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-kube-api-access-x2hgb\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.862542 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.862773 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.964702 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.964866 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.964985 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.965017 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2hgb\" (UniqueName: \"kubernetes.io/projected/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-kube-api-access-x2hgb\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.970597 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.971179 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.972708 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:44 crc kubenswrapper[4687]: I1203 18:02:44.986873 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2hgb\" (UniqueName: \"kubernetes.io/projected/976b9b5d-29fa-48e5-a77a-f3f5a480ad94-kube-api-access-x2hgb\") pod \"kube-state-metrics-0\" (UID: \"976b9b5d-29fa-48e5-a77a-f3f5a480ad94\") " pod="openstack/kube-state-metrics-0" Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.077400 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.418314 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c337091b-5995-4214-9161-18188eb806aa" path="/var/lib/kubelet/pods/c337091b-5995-4214-9161-18188eb806aa/volumes" Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.470564 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.470950 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="proxy-httpd" containerID="cri-o://5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29" gracePeriod=30 Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.471017 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="sg-core" containerID="cri-o://99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55" gracePeriod=30 Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.471081 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="ceilometer-central-agent" containerID="cri-o://8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73" gracePeriod=30 Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.471031 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="ceilometer-notification-agent" containerID="cri-o://3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b" gracePeriod=30 Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.538333 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/kube-state-metrics-0"] Dec 03 18:02:45 crc kubenswrapper[4687]: W1203 18:02:45.540611 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod976b9b5d_29fa_48e5_a77a_f3f5a480ad94.slice/crio-c3df563657645387ed0b8e2809ca20cf59139f1a4ed134757e750cd6c6fbb776 WatchSource:0}: Error finding container c3df563657645387ed0b8e2809ca20cf59139f1a4ed134757e750cd6c6fbb776: Status 404 returned error can't find the container with id c3df563657645387ed0b8e2809ca20cf59139f1a4ed134757e750cd6c6fbb776 Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.543623 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.682529 4687 generic.go:334] "Generic (PLEG): container finished" podID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerID="5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29" exitCode=0 Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.682565 4687 generic.go:334] "Generic (PLEG): container finished" podID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerID="99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55" exitCode=2 Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.682595 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerDied","Data":"5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29"} Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.682656 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerDied","Data":"99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55"} Dec 03 18:02:45 crc kubenswrapper[4687]: I1203 18:02:45.685155 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"976b9b5d-29fa-48e5-a77a-f3f5a480ad94","Type":"ContainerStarted","Data":"c3df563657645387ed0b8e2809ca20cf59139f1a4ed134757e750cd6c6fbb776"} Dec 03 18:02:46 crc kubenswrapper[4687]: I1203 18:02:46.697781 4687 generic.go:334] "Generic (PLEG): container finished" podID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerID="8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73" exitCode=0 Dec 03 18:02:46 crc kubenswrapper[4687]: I1203 18:02:46.697978 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerDied","Data":"8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73"} Dec 03 18:02:46 crc kubenswrapper[4687]: I1203 18:02:46.702089 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"976b9b5d-29fa-48e5-a77a-f3f5a480ad94","Type":"ContainerStarted","Data":"8fe0e8348f7afedef1040477aa91ccebb9d2e46af09f1b225b489b60a629e940"} Dec 03 18:02:46 crc kubenswrapper[4687]: I1203 18:02:46.703061 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 03 18:02:46 crc kubenswrapper[4687]: I1203 18:02:46.727673 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.897864703 podStartE2EDuration="2.727655029s" podCreationTimestamp="2025-12-03 18:02:44 +0000 UTC" firstStartedPulling="2025-12-03 18:02:45.543390696 +0000 UTC m=+1398.434086129" lastFinishedPulling="2025-12-03 18:02:46.373181022 +0000 UTC m=+1399.263876455" observedRunningTime="2025-12-03 18:02:46.716397905 +0000 UTC m=+1399.607093338" watchObservedRunningTime="2025-12-03 18:02:46.727655029 +0000 UTC m=+1399.618350462" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.718770 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.736373 4687 generic.go:334] "Generic (PLEG): container finished" podID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerID="3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b" exitCode=0 Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.736416 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerDied","Data":"3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b"} Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.736449 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6131c20a-aa01-4135-a19e-840a5cd9c5d8","Type":"ContainerDied","Data":"e2fa482679213ac2cc23e74725fb6d70815be8b21271b8fec1990f110b1afb33"} Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.736449 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.736486 4687 scope.go:117] "RemoveContainer" containerID="5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.789596 4687 scope.go:117] "RemoveContainer" containerID="99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.790284 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6fbl\" (UniqueName: \"kubernetes.io/projected/6131c20a-aa01-4135-a19e-840a5cd9c5d8-kube-api-access-b6fbl\") pod \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.790359 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-sg-core-conf-yaml\") pod \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.790387 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-run-httpd\") pod \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.790462 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-log-httpd\") pod \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.790524 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-config-data\") pod \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.790582 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-scripts\") pod \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.790637 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-combined-ca-bundle\") pod \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\" (UID: \"6131c20a-aa01-4135-a19e-840a5cd9c5d8\") " Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.791098 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6131c20a-aa01-4135-a19e-840a5cd9c5d8" (UID: "6131c20a-aa01-4135-a19e-840a5cd9c5d8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.791488 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6131c20a-aa01-4135-a19e-840a5cd9c5d8" (UID: "6131c20a-aa01-4135-a19e-840a5cd9c5d8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.792234 4687 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.792258 4687 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6131c20a-aa01-4135-a19e-840a5cd9c5d8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.812459 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6131c20a-aa01-4135-a19e-840a5cd9c5d8-kube-api-access-b6fbl" (OuterVolumeSpecName: "kube-api-access-b6fbl") pod "6131c20a-aa01-4135-a19e-840a5cd9c5d8" (UID: "6131c20a-aa01-4135-a19e-840a5cd9c5d8"). InnerVolumeSpecName "kube-api-access-b6fbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.813905 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-scripts" (OuterVolumeSpecName: "scripts") pod "6131c20a-aa01-4135-a19e-840a5cd9c5d8" (UID: "6131c20a-aa01-4135-a19e-840a5cd9c5d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.856101 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6131c20a-aa01-4135-a19e-840a5cd9c5d8" (UID: "6131c20a-aa01-4135-a19e-840a5cd9c5d8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.894270 4687 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-scripts\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.894296 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6fbl\" (UniqueName: \"kubernetes.io/projected/6131c20a-aa01-4135-a19e-840a5cd9c5d8-kube-api-access-b6fbl\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.894306 4687 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.910888 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6131c20a-aa01-4135-a19e-840a5cd9c5d8" (UID: "6131c20a-aa01-4135-a19e-840a5cd9c5d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.938491 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-config-data" (OuterVolumeSpecName: "config-data") pod "6131c20a-aa01-4135-a19e-840a5cd9c5d8" (UID: "6131c20a-aa01-4135-a19e-840a5cd9c5d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.996074 4687 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:49 crc kubenswrapper[4687]: I1203 18:02:49.996102 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6131c20a-aa01-4135-a19e-840a5cd9c5d8-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.026251 4687 scope.go:117] "RemoveContainer" containerID="3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.044547 4687 scope.go:117] "RemoveContainer" containerID="8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.072424 4687 scope.go:117] "RemoveContainer" containerID="5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29" Dec 03 18:02:50 crc kubenswrapper[4687]: E1203 18:02:50.075397 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29\": container with ID starting with 5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29 not found: ID does not exist" containerID="5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.075459 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29"} err="failed to get container status \"5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29\": rpc error: code = NotFound desc = could not find container 
\"5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29\": container with ID starting with 5145f89963721671edd1ea3c1de1368f7804f37877b0e751200d13eb3034cb29 not found: ID does not exist" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.075490 4687 scope.go:117] "RemoveContainer" containerID="99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.075801 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:50 crc kubenswrapper[4687]: E1203 18:02:50.075976 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55\": container with ID starting with 99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55 not found: ID does not exist" containerID="99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.076104 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55"} err="failed to get container status \"99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55\": rpc error: code = NotFound desc = could not find container \"99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55\": container with ID starting with 99aa54529756bab346eb92c7f37e82f124a7f77ebbb95ce3cb254c2879d35e55 not found: ID does not exist" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.076223 4687 scope.go:117] "RemoveContainer" containerID="3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b" Dec 03 18:02:50 crc kubenswrapper[4687]: E1203 18:02:50.076690 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b\": container with ID starting with 3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b not found: ID does not exist" containerID="3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.076726 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b"} err="failed to get container status \"3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b\": rpc error: code = NotFound desc = could not find container \"3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b\": container with ID starting with 3cb44102f896b5e17504814c225848597ab412aaf1664139ae35e5710dfe423b not found: ID does not exist" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.076758 4687 scope.go:117] "RemoveContainer" containerID="8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73" Dec 03 18:02:50 crc kubenswrapper[4687]: E1203 18:02:50.077107 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73\": container with ID starting with 8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73 not found: ID does not exist" containerID="8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.077153 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73"} err="failed to get container status \"8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73\": rpc error: code = NotFound desc = could not find container \"8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73\": container with ID 
starting with 8d0fff5802802dee3527c2e484f8f81a02986003dc5dbe5f2d1df5dc079b3f73 not found: ID does not exist" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.098001 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.108386 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:50 crc kubenswrapper[4687]: E1203 18:02:50.108893 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="ceilometer-notification-agent" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.108915 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="ceilometer-notification-agent" Dec 03 18:02:50 crc kubenswrapper[4687]: E1203 18:02:50.108926 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="proxy-httpd" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.108933 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="proxy-httpd" Dec 03 18:02:50 crc kubenswrapper[4687]: E1203 18:02:50.108960 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="sg-core" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.108967 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="sg-core" Dec 03 18:02:50 crc kubenswrapper[4687]: E1203 18:02:50.108984 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="ceilometer-central-agent" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.108992 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" 
containerName="ceilometer-central-agent" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.109227 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="ceilometer-notification-agent" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.109255 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="ceilometer-central-agent" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.109270 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="sg-core" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.109282 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" containerName="proxy-httpd" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.111362 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.115261 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.115482 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.115613 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.117167 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.198999 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd9cfd0-6df9-424b-b267-98e0a180a758-run-httpd\") pod \"ceilometer-0\" (UID: 
\"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.199352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.199501 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd9cfd0-6df9-424b-b267-98e0a180a758-log-httpd\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.199632 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qpx5\" (UniqueName: \"kubernetes.io/projected/8bd9cfd0-6df9-424b-b267-98e0a180a758-kube-api-access-9qpx5\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.199734 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-config-data\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.199912 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-scripts\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.200013 
4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.200096 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.302304 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-scripts\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.302350 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.302375 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.302444 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8bd9cfd0-6df9-424b-b267-98e0a180a758-run-httpd\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.302472 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.302508 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd9cfd0-6df9-424b-b267-98e0a180a758-log-httpd\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.302540 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qpx5\" (UniqueName: \"kubernetes.io/projected/8bd9cfd0-6df9-424b-b267-98e0a180a758-kube-api-access-9qpx5\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.302571 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-config-data\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.303260 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd9cfd0-6df9-424b-b267-98e0a180a758-run-httpd\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 
18:02:50.303710 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bd9cfd0-6df9-424b-b267-98e0a180a758-log-httpd\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.308148 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.308157 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-scripts\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.309100 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-config-data\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.311538 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.328475 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bd9cfd0-6df9-424b-b267-98e0a180a758-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " 
pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.353037 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qpx5\" (UniqueName: \"kubernetes.io/projected/8bd9cfd0-6df9-424b-b267-98e0a180a758-kube-api-access-9qpx5\") pod \"ceilometer-0\" (UID: \"8bd9cfd0-6df9-424b-b267-98e0a180a758\") " pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.429481 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.896556 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 03 18:02:50 crc kubenswrapper[4687]: I1203 18:02:50.991837 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 18:02:51 crc kubenswrapper[4687]: I1203 18:02:51.417865 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6131c20a-aa01-4135-a19e-840a5cd9c5d8" path="/var/lib/kubelet/pods/6131c20a-aa01-4135-a19e-840a5cd9c5d8/volumes" Dec 03 18:02:51 crc kubenswrapper[4687]: I1203 18:02:51.757174 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd9cfd0-6df9-424b-b267-98e0a180a758","Type":"ContainerStarted","Data":"e1cad1a651293e887b50aea1827ba3a8ea4066363bfa41eefc757abe89216698"} Dec 03 18:02:52 crc kubenswrapper[4687]: I1203 18:02:52.101856 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 18:02:52 crc kubenswrapper[4687]: I1203 18:02:52.767412 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd9cfd0-6df9-424b-b267-98e0a180a758","Type":"ContainerStarted","Data":"5e78724b31c394498858e2008379460ca91aff1fb790592021f509b851155b1c"} Dec 03 18:02:54 crc kubenswrapper[4687]: I1203 18:02:54.788008 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"8bd9cfd0-6df9-424b-b267-98e0a180a758","Type":"ContainerStarted","Data":"d319c4950e398646bd401971dc09cc0e15b8a1df56ab72cbc97f4fbdbb6366f2"} Dec 03 18:02:54 crc kubenswrapper[4687]: I1203 18:02:54.788556 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd9cfd0-6df9-424b-b267-98e0a180a758","Type":"ContainerStarted","Data":"13f64e3ca882b4884cb237c622f012c3358e6d9918c404f03a6c553aa9aaf6aa"} Dec 03 18:02:55 crc kubenswrapper[4687]: I1203 18:02:55.107287 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 03 18:02:55 crc kubenswrapper[4687]: I1203 18:02:55.953980 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" containerName="rabbitmq" containerID="cri-o://eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3" gracePeriod=604796 Dec 03 18:02:56 crc kubenswrapper[4687]: I1203 18:02:56.749656 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" containerName="rabbitmq" containerID="cri-o://8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007" gracePeriod=604796 Dec 03 18:02:59 crc kubenswrapper[4687]: I1203 18:02:59.104557 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 03 18:02:59 crc kubenswrapper[4687]: I1203 18:02:59.455278 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 03 
18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.667355 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.737457 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-plugins-conf\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.737918 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb5ts\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-kube-api-access-zb5ts\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.737947 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-server-conf\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.737971 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-plugins\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.737986 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-tls\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 
18:03:02.738023 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-config-data\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.738111 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-confd\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.738149 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.738175 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-erlang-cookie-secret\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.738195 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-pod-info\") pod \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.738235 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-erlang-cookie\") pod 
\"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\" (UID: \"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a\") " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.738494 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.738946 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.739095 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.739135 4687 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.743264 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.747681 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.747959 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-kube-api-access-zb5ts" (OuterVolumeSpecName: "kube-api-access-zb5ts") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "kube-api-access-zb5ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.749781 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.751594 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-pod-info" (OuterVolumeSpecName: "pod-info") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.756569 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.773227 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-config-data" (OuterVolumeSpecName: "config-data") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.812168 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-server-conf" (OuterVolumeSpecName: "server-conf") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.840901 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.840934 4687 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.840943 4687 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-pod-info\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.840954 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb5ts\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-kube-api-access-zb5ts\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.840963 4687 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-server-conf\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.840972 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.840979 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.840989 4687 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.861530 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.871135 4687 generic.go:334] "Generic (PLEG): container finished" podID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" containerID="eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3" exitCode=0 Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.871169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a","Type":"ContainerDied","Data":"eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3"} Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.871196 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a","Type":"ContainerDied","Data":"357d9f30a290c2304ed8732053ff3d8567593144605bd340fd89a90ab6809b43"} Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.871211 4687 scope.go:117] "RemoveContainer" containerID="eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.871328 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.906941 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" (UID: "cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.942764 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.942809 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:02 crc kubenswrapper[4687]: I1203 18:03:02.952926 4687 scope.go:117] "RemoveContainer" containerID="89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.016367 4687 scope.go:117] "RemoveContainer" containerID="eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3" Dec 03 18:03:03 crc kubenswrapper[4687]: E1203 18:03:03.016866 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3\": container with ID starting with eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3 not found: ID does not exist" containerID="eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.016949 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3"} err="failed to get container status \"eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3\": rpc error: code = NotFound desc = could not find container \"eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3\": container with ID starting with eb1fc78d595fff0a656cbb8b53a6f9ecba31fed5a99343bd16bb0ebc238efce3 not 
found: ID does not exist" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.016983 4687 scope.go:117] "RemoveContainer" containerID="89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16" Dec 03 18:03:03 crc kubenswrapper[4687]: E1203 18:03:03.017563 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16\": container with ID starting with 89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16 not found: ID does not exist" containerID="89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.017596 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16"} err="failed to get container status \"89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16\": rpc error: code = NotFound desc = could not find container \"89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16\": container with ID starting with 89617a97e4dffd77e1a02a6c0bfbdca12de28ea668174dca66e85dc06c6c5c16 not found: ID does not exist" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.246294 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.259259 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.271365 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 18:03:03 crc kubenswrapper[4687]: E1203 18:03:03.271769 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" containerName="rabbitmq" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.271789 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" containerName="rabbitmq" Dec 03 18:03:03 crc kubenswrapper[4687]: E1203 18:03:03.271805 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" containerName="setup-container" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.271812 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" containerName="setup-container" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.272421 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" containerName="rabbitmq" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.273634 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.281753 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.282134 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.281838 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.281925 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7vk44" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.282293 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.281976 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.282011 4687 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.291954 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.301744 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.362089 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-plugins-conf\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.362209 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-plugins\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.362258 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-tls\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.362284 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-server-conf\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.362833 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363241 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363263 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-config-data\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363296 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-erlang-cookie\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363324 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363353 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkrzk\" (UniqueName: 
\"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-kube-api-access-hkrzk\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363407 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63e536c1-72f7-438c-b34c-b8750dd1796b-erlang-cookie-secret\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363469 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-confd\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363495 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63e536c1-72f7-438c-b34c-b8750dd1796b-pod-info\") pod \"63e536c1-72f7-438c-b34c-b8750dd1796b\" (UID: \"63e536c1-72f7-438c-b34c-b8750dd1796b\") " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363790 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363849 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " 
pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363879 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-config-data\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.363930 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.364024 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.364051 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfx4j\" (UniqueName: \"kubernetes.io/projected/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-kube-api-access-zfx4j\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.365569 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 
18:03:03.365644 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.365705 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.365730 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.365762 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.366057 4687 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.366079 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.371409 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.412570 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.414393 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e536c1-72f7-438c-b34c-b8750dd1796b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.414532 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/63e536c1-72f7-438c-b34c-b8750dd1796b-pod-info" (OuterVolumeSpecName: "pod-info") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.422654 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.425095 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-kube-api-access-hkrzk" (OuterVolumeSpecName: "kube-api-access-hkrzk") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "kube-api-access-hkrzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.450538 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a" path="/var/lib/kubelet/pods/cf3c9153-4f5a-4f7f-b4c9-8f3bfdbf5b0a/volumes" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.454293 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-server-conf" (OuterVolumeSpecName: "server-conf") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.469834 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.470202 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.470345 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.470441 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.470538 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.470740 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.470867 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.470957 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-config-data\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.471105 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.471316 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.472285 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfx4j\" (UniqueName: \"kubernetes.io/projected/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-kube-api-access-zfx4j\") pod 
\"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.472184 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.472856 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-config-data\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.474265 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.474967 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.475094 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.475211 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkrzk\" (UniqueName: 
\"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-kube-api-access-hkrzk\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.475292 4687 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63e536c1-72f7-438c-b34c-b8750dd1796b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.475380 4687 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63e536c1-72f7-438c-b34c-b8750dd1796b-pod-info\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.475477 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.475555 4687 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-server-conf\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.478744 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.479350 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.479974 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.480591 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.480883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.487626 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.500548 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfx4j\" (UniqueName: \"kubernetes.io/projected/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-kube-api-access-zfx4j\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.502379 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-config-data" (OuterVolumeSpecName: "config-data") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" 
(UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.506806 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "63e536c1-72f7-438c-b34c-b8750dd1796b" (UID: "63e536c1-72f7-438c-b34c-b8750dd1796b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.509735 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.512635 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bef36ed8-b2b0-465c-9719-c9ff963dcd2f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.565212 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"bef36ed8-b2b0-465c-9719-c9ff963dcd2f\") " pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.578020 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63e536c1-72f7-438c-b34c-b8750dd1796b-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.578058 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" 
DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.578072 4687 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63e536c1-72f7-438c-b34c-b8750dd1796b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.596011 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.787979 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d99lx"] Dec 03 18:03:03 crc kubenswrapper[4687]: E1203 18:03:03.788581 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" containerName="setup-container" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.788594 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" containerName="setup-container" Dec 03 18:03:03 crc kubenswrapper[4687]: E1203 18:03:03.788618 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" containerName="rabbitmq" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.788624 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" containerName="rabbitmq" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.788793 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" containerName="rabbitmq" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.790064 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.800087 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d99lx"] Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.886087 4687 generic.go:334] "Generic (PLEG): container finished" podID="63e536c1-72f7-438c-b34c-b8750dd1796b" containerID="8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007" exitCode=0 Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.886157 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63e536c1-72f7-438c-b34c-b8750dd1796b","Type":"ContainerDied","Data":"8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007"} Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.886186 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63e536c1-72f7-438c-b34c-b8750dd1796b","Type":"ContainerDied","Data":"a1f26d35841cedbaf471b5d1cb248733134db2cafc47f63168dea48a48cc167b"} Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.886203 4687 scope.go:117] "RemoveContainer" containerID="8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.886247 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.888601 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-utilities\") pod \"redhat-operators-d99lx\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.888665 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhx67\" (UniqueName: \"kubernetes.io/projected/8ab922dd-3caa-4df0-9f18-347140283827-kube-api-access-mhx67\") pod \"redhat-operators-d99lx\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.888718 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-catalog-content\") pod \"redhat-operators-d99lx\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.915828 4687 scope.go:117] "RemoveContainer" containerID="51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.922419 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.935360 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.955929 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 
18:03:03.958296 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.961577 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.961794 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.962088 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.962271 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.962467 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.962625 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-psh6c" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.963050 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.971334 4687 scope.go:117] "RemoveContainer" containerID="8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007" Dec 03 18:03:03 crc kubenswrapper[4687]: E1203 18:03:03.975764 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007\": container with ID starting with 8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007 not found: ID does not exist" containerID="8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007" Dec 
03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.975800 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007"} err="failed to get container status \"8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007\": rpc error: code = NotFound desc = could not find container \"8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007\": container with ID starting with 8f3fec3a9db5c9b37fd01df508eb7be162af7e5b56886edba4cbf274147e6007 not found: ID does not exist" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.975823 4687 scope.go:117] "RemoveContainer" containerID="51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d" Dec 03 18:03:03 crc kubenswrapper[4687]: E1203 18:03:03.976730 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d\": container with ID starting with 51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d not found: ID does not exist" containerID="51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.976784 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d"} err="failed to get container status \"51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d\": rpc error: code = NotFound desc = could not find container \"51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d\": container with ID starting with 51403661b58219621b8600e1fcbecbc8d54e535c7a312eedb1e15e95fe4d390d not found: ID does not exist" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.983957 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 
18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990264 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b31a63e3-b46e-403c-b1b4-3acd833f453f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990352 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b31a63e3-b46e-403c-b1b4-3acd833f453f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990399 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990471 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jncg\" (UniqueName: \"kubernetes.io/projected/b31a63e3-b46e-403c-b1b4-3acd833f453f-kube-api-access-7jncg\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990499 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 
crc kubenswrapper[4687]: I1203 18:03:03.990542 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-utilities\") pod \"redhat-operators-d99lx\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990575 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhx67\" (UniqueName: \"kubernetes.io/projected/8ab922dd-3caa-4df0-9f18-347140283827-kube-api-access-mhx67\") pod \"redhat-operators-d99lx\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990606 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990626 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b31a63e3-b46e-403c-b1b4-3acd833f453f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990653 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc 
kubenswrapper[4687]: I1203 18:03:03.990676 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-catalog-content\") pod \"redhat-operators-d99lx\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990692 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b31a63e3-b46e-403c-b1b4-3acd833f453f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990712 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.990735 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b31a63e3-b46e-403c-b1b4-3acd833f453f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.991622 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-utilities\") pod \"redhat-operators-d99lx\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:03 crc kubenswrapper[4687]: I1203 18:03:03.991946 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-catalog-content\") pod \"redhat-operators-d99lx\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.016191 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhx67\" (UniqueName: \"kubernetes.io/projected/8ab922dd-3caa-4df0-9f18-347140283827-kube-api-access-mhx67\") pod \"redhat-operators-d99lx\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.046879 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.092624 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b31a63e3-b46e-403c-b1b4-3acd833f453f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.092664 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.092690 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b31a63e3-b46e-403c-b1b4-3acd833f453f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc 
kubenswrapper[4687]: I1203 18:03:04.092713 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b31a63e3-b46e-403c-b1b4-3acd833f453f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.092738 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b31a63e3-b46e-403c-b1b4-3acd833f453f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.092773 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.092814 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jncg\" (UniqueName: \"kubernetes.io/projected/b31a63e3-b46e-403c-b1b4-3acd833f453f-kube-api-access-7jncg\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.092841 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.092892 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.093243 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b31a63e3-b46e-403c-b1b4-3acd833f453f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.093271 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.093634 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.094644 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b31a63e3-b46e-403c-b1b4-3acd833f453f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.094796 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/b31a63e3-b46e-403c-b1b4-3acd833f453f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.094921 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.095433 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.095577 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b31a63e3-b46e-403c-b1b4-3acd833f453f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.097730 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.098639 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b31a63e3-b46e-403c-b1b4-3acd833f453f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.109569 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b31a63e3-b46e-403c-b1b4-3acd833f453f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.112630 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b31a63e3-b46e-403c-b1b4-3acd833f453f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.113134 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.114905 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jncg\" (UniqueName: \"kubernetes.io/projected/b31a63e3-b46e-403c-b1b4-3acd833f453f-kube-api-access-7jncg\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.138770 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b31a63e3-b46e-403c-b1b4-3acd833f453f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.296576 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:04 crc kubenswrapper[4687]: W1203 18:03:04.558963 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab922dd_3caa_4df0_9f18_347140283827.slice/crio-1980b54377c5e8446886fc2ed3b76b2b6321720c0b4125d77236332d8cb4ad87 WatchSource:0}: Error finding container 1980b54377c5e8446886fc2ed3b76b2b6321720c0b4125d77236332d8cb4ad87: Status 404 returned error can't find the container with id 1980b54377c5e8446886fc2ed3b76b2b6321720c0b4125d77236332d8cb4ad87 Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.563934 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d99lx"] Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.791743 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.927344 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bef36ed8-b2b0-465c-9719-c9ff963dcd2f","Type":"ContainerStarted","Data":"6f5b85d11a08b3ad4308ab10586f9ee1dbd1509b5e94c4ebed20cb4081b6711c"} Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.929018 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d99lx" event={"ID":"8ab922dd-3caa-4df0-9f18-347140283827","Type":"ContainerStarted","Data":"466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc"} Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.929039 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d99lx" event={"ID":"8ab922dd-3caa-4df0-9f18-347140283827","Type":"ContainerStarted","Data":"1980b54377c5e8446886fc2ed3b76b2b6321720c0b4125d77236332d8cb4ad87"} Dec 03 18:03:04 crc kubenswrapper[4687]: I1203 18:03:04.937320 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b31a63e3-b46e-403c-b1b4-3acd833f453f","Type":"ContainerStarted","Data":"8e47af241cc066d90f57b3545962eb0b88e7276d55ec8f244ba6fc237e1d3728"} Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.273554 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-2hbx9"] Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.279982 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.290932 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.315291 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-2hbx9"] Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.421507 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e536c1-72f7-438c-b34c-b8750dd1796b" path="/var/lib/kubelet/pods/63e536c1-72f7-438c-b34c-b8750dd1796b/volumes" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.434474 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.434558 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-config\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.434580 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.434686 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.434723 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnfz\" (UniqueName: \"kubernetes.io/projected/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-kube-api-access-7dnfz\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.434803 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.434883 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 
crc kubenswrapper[4687]: I1203 18:03:05.536328 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.536461 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.536518 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.536565 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-config\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.536593 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.536632 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.536688 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnfz\" (UniqueName: \"kubernetes.io/projected/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-kube-api-access-7dnfz\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.537569 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.537898 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-config\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.538207 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.538497 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.538497 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.539519 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.561218 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnfz\" (UniqueName: \"kubernetes.io/projected/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-kube-api-access-7dnfz\") pod \"dnsmasq-dns-79bd4cc8c9-2hbx9\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.659807 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.951710 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bef36ed8-b2b0-465c-9719-c9ff963dcd2f","Type":"ContainerStarted","Data":"d944efcd63b0452655c8ecb02feedc4c97dd7179be46fe9d9cf6b9e9ec567785"} Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.957730 4687 generic.go:334] "Generic (PLEG): container finished" podID="8ab922dd-3caa-4df0-9f18-347140283827" containerID="466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc" exitCode=0 Dec 03 18:03:05 crc kubenswrapper[4687]: I1203 18:03:05.957881 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d99lx" event={"ID":"8ab922dd-3caa-4df0-9f18-347140283827","Type":"ContainerDied","Data":"466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc"} Dec 03 18:03:06 crc kubenswrapper[4687]: W1203 18:03:06.161783 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda376077f_3fb9_4af0_b925_8c2e06aaa5f1.slice/crio-09ec2b2fabe2d0d3d5e9c783ed4366bf0f3743632aa2467b95cdd5ae07cd30fd WatchSource:0}: Error finding container 09ec2b2fabe2d0d3d5e9c783ed4366bf0f3743632aa2467b95cdd5ae07cd30fd: Status 404 returned error can't find the container with id 09ec2b2fabe2d0d3d5e9c783ed4366bf0f3743632aa2467b95cdd5ae07cd30fd Dec 03 18:03:06 crc kubenswrapper[4687]: I1203 18:03:06.163615 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-2hbx9"] Dec 03 18:03:06 crc kubenswrapper[4687]: I1203 18:03:06.974858 4687 generic.go:334] "Generic (PLEG): container finished" podID="a376077f-3fb9-4af0-b925-8c2e06aaa5f1" containerID="9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0" exitCode=0 Dec 03 18:03:06 crc kubenswrapper[4687]: I1203 18:03:06.974963 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" event={"ID":"a376077f-3fb9-4af0-b925-8c2e06aaa5f1","Type":"ContainerDied","Data":"9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0"} Dec 03 18:03:06 crc kubenswrapper[4687]: I1203 18:03:06.975392 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" event={"ID":"a376077f-3fb9-4af0-b925-8c2e06aaa5f1","Type":"ContainerStarted","Data":"09ec2b2fabe2d0d3d5e9c783ed4366bf0f3743632aa2467b95cdd5ae07cd30fd"} Dec 03 18:03:06 crc kubenswrapper[4687]: I1203 18:03:06.979819 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d99lx" event={"ID":"8ab922dd-3caa-4df0-9f18-347140283827","Type":"ContainerStarted","Data":"0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5"} Dec 03 18:03:07 crc kubenswrapper[4687]: I1203 18:03:07.992236 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b31a63e3-b46e-403c-b1b4-3acd833f453f","Type":"ContainerStarted","Data":"fe1497c370b5fa9c2635601dd99e9555e46d1e8ffe9dbf37c48b117741f0aed5"} Dec 03 18:03:07 crc kubenswrapper[4687]: I1203 18:03:07.995249 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" event={"ID":"a376077f-3fb9-4af0-b925-8c2e06aaa5f1","Type":"ContainerStarted","Data":"ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec"} Dec 03 18:03:08 crc kubenswrapper[4687]: I1203 18:03:08.054689 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" podStartSLOduration=3.054662767 podStartE2EDuration="3.054662767s" podCreationTimestamp="2025-12-03 18:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:03:08.043710601 +0000 UTC m=+1420.934406044" 
watchObservedRunningTime="2025-12-03 18:03:08.054662767 +0000 UTC m=+1420.945358230" Dec 03 18:03:09 crc kubenswrapper[4687]: I1203 18:03:09.007253 4687 generic.go:334] "Generic (PLEG): container finished" podID="8ab922dd-3caa-4df0-9f18-347140283827" containerID="0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5" exitCode=0 Dec 03 18:03:09 crc kubenswrapper[4687]: I1203 18:03:09.007357 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d99lx" event={"ID":"8ab922dd-3caa-4df0-9f18-347140283827","Type":"ContainerDied","Data":"0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5"} Dec 03 18:03:09 crc kubenswrapper[4687]: I1203 18:03:09.008607 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:10 crc kubenswrapper[4687]: I1203 18:03:10.020052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bd9cfd0-6df9-424b-b267-98e0a180a758","Type":"ContainerStarted","Data":"b1a22939542337aa163964b79225d46751e678cb4a6f324c8801141d508bb8c3"} Dec 03 18:03:10 crc kubenswrapper[4687]: I1203 18:03:10.020554 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 03 18:03:10 crc kubenswrapper[4687]: I1203 18:03:10.022534 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d99lx" event={"ID":"8ab922dd-3caa-4df0-9f18-347140283827","Type":"ContainerStarted","Data":"6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1"} Dec 03 18:03:10 crc kubenswrapper[4687]: I1203 18:03:10.055479 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.786907494 podStartE2EDuration="20.055448197s" podCreationTimestamp="2025-12-03 18:02:50 +0000 UTC" firstStartedPulling="2025-12-03 18:02:50.898873311 +0000 UTC m=+1403.789568744" 
lastFinishedPulling="2025-12-03 18:03:09.167413994 +0000 UTC m=+1422.058109447" observedRunningTime="2025-12-03 18:03:10.045398735 +0000 UTC m=+1422.936094168" watchObservedRunningTime="2025-12-03 18:03:10.055448197 +0000 UTC m=+1422.946143640" Dec 03 18:03:10 crc kubenswrapper[4687]: I1203 18:03:10.079698 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d99lx" podStartSLOduration=3.611272289 podStartE2EDuration="7.079673062s" podCreationTimestamp="2025-12-03 18:03:03 +0000 UTC" firstStartedPulling="2025-12-03 18:03:05.959892774 +0000 UTC m=+1418.850588207" lastFinishedPulling="2025-12-03 18:03:09.428293527 +0000 UTC m=+1422.318988980" observedRunningTime="2025-12-03 18:03:10.068452558 +0000 UTC m=+1422.959148001" watchObservedRunningTime="2025-12-03 18:03:10.079673062 +0000 UTC m=+1422.970368495" Dec 03 18:03:14 crc kubenswrapper[4687]: I1203 18:03:14.111861 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:03:14 crc kubenswrapper[4687]: I1203 18:03:14.112427 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:03:14 crc kubenswrapper[4687]: I1203 18:03:14.114075 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:14 crc kubenswrapper[4687]: I1203 18:03:14.114150 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d99lx" 
Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.154538 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d99lx" podUID="8ab922dd-3caa-4df0-9f18-347140283827" containerName="registry-server" probeResult="failure" output=< Dec 03 18:03:15 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Dec 03 18:03:15 crc kubenswrapper[4687]: > Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.661318 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.733089 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jr2rp"] Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.733919 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" podUID="3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" containerName="dnsmasq-dns" containerID="cri-o://1d82834b7d62075219bc3b8298abe804b32ef5397a7ada4229b8eada8623caa5" gracePeriod=10 Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.855084 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-dpnwg"] Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.856722 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.885876 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-dpnwg"] Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.942876 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv2vq\" (UniqueName: \"kubernetes.io/projected/23a0d543-20cc-4b95-9f11-12b55442b95e-kube-api-access-rv2vq\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.942921 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.942948 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.942968 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-dns-svc\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.943222 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-config\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.943462 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:15 crc kubenswrapper[4687]: I1203 18:03:15.943508 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.045466 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.045507 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-dns-svc\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.045560 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-config\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.045622 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.045641 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.045698 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv2vq\" (UniqueName: \"kubernetes.io/projected/23a0d543-20cc-4b95-9f11-12b55442b95e-kube-api-access-rv2vq\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.045721 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.046883 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.046907 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-dns-svc\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.047269 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.047349 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.047703 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-config\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.050867 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/23a0d543-20cc-4b95-9f11-12b55442b95e-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.079605 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv2vq\" (UniqueName: \"kubernetes.io/projected/23a0d543-20cc-4b95-9f11-12b55442b95e-kube-api-access-rv2vq\") pod \"dnsmasq-dns-55478c4467-dpnwg\" (UID: \"23a0d543-20cc-4b95-9f11-12b55442b95e\") " pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.093197 4687 generic.go:334] "Generic (PLEG): container finished" podID="3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" containerID="1d82834b7d62075219bc3b8298abe804b32ef5397a7ada4229b8eada8623caa5" exitCode=0 Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.093245 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" event={"ID":"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba","Type":"ContainerDied","Data":"1d82834b7d62075219bc3b8298abe804b32ef5397a7ada4229b8eada8623caa5"} Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.214966 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.315932 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.451270 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-swift-storage-0\") pod \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.451451 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-sb\") pod \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.451666 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-config\") pod \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.451728 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gpxj\" (UniqueName: \"kubernetes.io/projected/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-kube-api-access-5gpxj\") pod \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.451770 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-svc\") pod \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.451801 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-nb\") pod \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\" (UID: \"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba\") " Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.459665 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-kube-api-access-5gpxj" (OuterVolumeSpecName: "kube-api-access-5gpxj") pod "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" (UID: "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba"). InnerVolumeSpecName "kube-api-access-5gpxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.502743 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-config" (OuterVolumeSpecName: "config") pod "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" (UID: "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.502950 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" (UID: "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.505326 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" (UID: "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.509495 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" (UID: "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.516976 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" (UID: "3bb7f8f0-2702-4b86-be5d-b7f2957e08ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.554309 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.554342 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-config\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.554367 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gpxj\" (UniqueName: \"kubernetes.io/projected/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-kube-api-access-5gpxj\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.554380 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:16 crc 
kubenswrapper[4687]: I1203 18:03:16.554388 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.554396 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:16 crc kubenswrapper[4687]: I1203 18:03:16.674582 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-dpnwg"] Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.103456 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" event={"ID":"3bb7f8f0-2702-4b86-be5d-b7f2957e08ba","Type":"ContainerDied","Data":"fd0a4dad4e1afa60a655ef3cb03a009ed795f8dce0c3d61c429ffc0bf701210f"} Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.103535 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-jr2rp" Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.103842 4687 scope.go:117] "RemoveContainer" containerID="1d82834b7d62075219bc3b8298abe804b32ef5397a7ada4229b8eada8623caa5" Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.105022 4687 generic.go:334] "Generic (PLEG): container finished" podID="23a0d543-20cc-4b95-9f11-12b55442b95e" containerID="d064f7f722e49f0b84c30b2564552f2fd9bc34220bbcf30edf6aa762d7398aa2" exitCode=0 Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.105052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-dpnwg" event={"ID":"23a0d543-20cc-4b95-9f11-12b55442b95e","Type":"ContainerDied","Data":"d064f7f722e49f0b84c30b2564552f2fd9bc34220bbcf30edf6aa762d7398aa2"} Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.105104 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-dpnwg" event={"ID":"23a0d543-20cc-4b95-9f11-12b55442b95e","Type":"ContainerStarted","Data":"337e4185bd55e66a015bc2749bbc8261e7ae5921e9c1d5e99dda3ab427444b8c"} Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.124065 4687 scope.go:117] "RemoveContainer" containerID="1f008fa1ba4a11ac3c415a58884fad052498627ef9d69cfa7c42592fe29da64a" Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.349360 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jr2rp"] Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.358287 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-jr2rp"] Dec 03 18:03:17 crc kubenswrapper[4687]: I1203 18:03:17.418242 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" path="/var/lib/kubelet/pods/3bb7f8f0-2702-4b86-be5d-b7f2957e08ba/volumes" Dec 03 18:03:18 crc kubenswrapper[4687]: I1203 18:03:18.116375 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-55478c4467-dpnwg" event={"ID":"23a0d543-20cc-4b95-9f11-12b55442b95e","Type":"ContainerStarted","Data":"35a09a052739a9fc85be190bf0e70a58ae9460c4147b71885d9c8c2555431bd3"} Dec 03 18:03:18 crc kubenswrapper[4687]: I1203 18:03:18.116924 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:20 crc kubenswrapper[4687]: I1203 18:03:20.456773 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 03 18:03:20 crc kubenswrapper[4687]: I1203 18:03:20.493559 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-dpnwg" podStartSLOduration=5.493537358 podStartE2EDuration="5.493537358s" podCreationTimestamp="2025-12-03 18:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:03:18.136169744 +0000 UTC m=+1431.026865187" watchObservedRunningTime="2025-12-03 18:03:20.493537358 +0000 UTC m=+1433.384232801" Dec 03 18:03:24 crc kubenswrapper[4687]: I1203 18:03:24.173577 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:24 crc kubenswrapper[4687]: I1203 18:03:24.234294 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:24 crc kubenswrapper[4687]: I1203 18:03:24.420901 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d99lx"] Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.200996 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d99lx" podUID="8ab922dd-3caa-4df0-9f18-347140283827" containerName="registry-server" 
containerID="cri-o://6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1" gracePeriod=2 Dec 03 18:03:25 crc kubenswrapper[4687]: E1203 18:03:25.353263 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab922dd_3caa_4df0_9f18_347140283827.slice/crio-conmon-6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab922dd_3caa_4df0_9f18_347140283827.slice/crio-6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1.scope\": RecentStats: unable to find data in memory cache]" Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.646781 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.734580 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-utilities\") pod \"8ab922dd-3caa-4df0-9f18-347140283827\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.734782 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-catalog-content\") pod \"8ab922dd-3caa-4df0-9f18-347140283827\" (UID: \"8ab922dd-3caa-4df0-9f18-347140283827\") " Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.734808 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhx67\" (UniqueName: \"kubernetes.io/projected/8ab922dd-3caa-4df0-9f18-347140283827-kube-api-access-mhx67\") pod \"8ab922dd-3caa-4df0-9f18-347140283827\" (UID: 
\"8ab922dd-3caa-4df0-9f18-347140283827\") " Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.735682 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-utilities" (OuterVolumeSpecName: "utilities") pod "8ab922dd-3caa-4df0-9f18-347140283827" (UID: "8ab922dd-3caa-4df0-9f18-347140283827"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.742415 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab922dd-3caa-4df0-9f18-347140283827-kube-api-access-mhx67" (OuterVolumeSpecName: "kube-api-access-mhx67") pod "8ab922dd-3caa-4df0-9f18-347140283827" (UID: "8ab922dd-3caa-4df0-9f18-347140283827"). InnerVolumeSpecName "kube-api-access-mhx67". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.837273 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhx67\" (UniqueName: \"kubernetes.io/projected/8ab922dd-3caa-4df0-9f18-347140283827-kube-api-access-mhx67\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.837313 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.848113 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ab922dd-3caa-4df0-9f18-347140283827" (UID: "8ab922dd-3caa-4df0-9f18-347140283827"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:03:25 crc kubenswrapper[4687]: I1203 18:03:25.939501 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ab922dd-3caa-4df0-9f18-347140283827-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.217467 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-dpnwg" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.218823 4687 generic.go:334] "Generic (PLEG): container finished" podID="8ab922dd-3caa-4df0-9f18-347140283827" containerID="6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1" exitCode=0 Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.218863 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d99lx" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.218871 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d99lx" event={"ID":"8ab922dd-3caa-4df0-9f18-347140283827","Type":"ContainerDied","Data":"6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1"} Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.218912 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d99lx" event={"ID":"8ab922dd-3caa-4df0-9f18-347140283827","Type":"ContainerDied","Data":"1980b54377c5e8446886fc2ed3b76b2b6321720c0b4125d77236332d8cb4ad87"} Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.218934 4687 scope.go:117] "RemoveContainer" containerID="6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.249024 4687 scope.go:117] "RemoveContainer" containerID="0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5" Dec 03 18:03:26 crc kubenswrapper[4687]: 
I1203 18:03:26.279693 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d99lx"] Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.290469 4687 scope.go:117] "RemoveContainer" containerID="466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.291773 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d99lx"] Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.301386 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-2hbx9"] Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.301648 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" podUID="a376077f-3fb9-4af0-b925-8c2e06aaa5f1" containerName="dnsmasq-dns" containerID="cri-o://ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec" gracePeriod=10 Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.349399 4687 scope.go:117] "RemoveContainer" containerID="6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1" Dec 03 18:03:26 crc kubenswrapper[4687]: E1203 18:03:26.351270 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1\": container with ID starting with 6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1 not found: ID does not exist" containerID="6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.351750 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1"} err="failed to get container status \"6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1\": rpc error: code = 
NotFound desc = could not find container \"6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1\": container with ID starting with 6b42d2f27ef6fe75c1963c22793f3ba354b16667bf3c0f546f3803cfb9d49eb1 not found: ID does not exist" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.351873 4687 scope.go:117] "RemoveContainer" containerID="0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5" Dec 03 18:03:26 crc kubenswrapper[4687]: E1203 18:03:26.353973 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5\": container with ID starting with 0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5 not found: ID does not exist" containerID="0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.354159 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5"} err="failed to get container status \"0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5\": rpc error: code = NotFound desc = could not find container \"0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5\": container with ID starting with 0e421cabff02bd897c74a5de78d1a49e5c7315367c5daa1f0fc8765a4f2b7df5 not found: ID does not exist" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.354281 4687 scope.go:117] "RemoveContainer" containerID="466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc" Dec 03 18:03:26 crc kubenswrapper[4687]: E1203 18:03:26.354721 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc\": container with ID starting with 
466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc not found: ID does not exist" containerID="466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.354776 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc"} err="failed to get container status \"466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc\": rpc error: code = NotFound desc = could not find container \"466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc\": container with ID starting with 466cb04b3b8aef62003e577548a5f77ab938a50714d7d82d99b29a9ae33d15bc not found: ID does not exist" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.811738 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.858243 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-sb\") pod \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.858337 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dnfz\" (UniqueName: \"kubernetes.io/projected/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-kube-api-access-7dnfz\") pod \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.858477 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-config\") pod \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\" (UID: 
\"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.858563 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-swift-storage-0\") pod \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.858615 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-nb\") pod \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.858659 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-svc\") pod \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.858879 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-openstack-edpm-ipam\") pod \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\" (UID: \"a376077f-3fb9-4af0-b925-8c2e06aaa5f1\") " Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.866497 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-kube-api-access-7dnfz" (OuterVolumeSpecName: "kube-api-access-7dnfz") pod "a376077f-3fb9-4af0-b925-8c2e06aaa5f1" (UID: "a376077f-3fb9-4af0-b925-8c2e06aaa5f1"). InnerVolumeSpecName "kube-api-access-7dnfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.915984 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a376077f-3fb9-4af0-b925-8c2e06aaa5f1" (UID: "a376077f-3fb9-4af0-b925-8c2e06aaa5f1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.916066 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a376077f-3fb9-4af0-b925-8c2e06aaa5f1" (UID: "a376077f-3fb9-4af0-b925-8c2e06aaa5f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.922820 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a376077f-3fb9-4af0-b925-8c2e06aaa5f1" (UID: "a376077f-3fb9-4af0-b925-8c2e06aaa5f1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.928693 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-config" (OuterVolumeSpecName: "config") pod "a376077f-3fb9-4af0-b925-8c2e06aaa5f1" (UID: "a376077f-3fb9-4af0-b925-8c2e06aaa5f1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.932639 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "a376077f-3fb9-4af0-b925-8c2e06aaa5f1" (UID: "a376077f-3fb9-4af0-b925-8c2e06aaa5f1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.946901 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a376077f-3fb9-4af0-b925-8c2e06aaa5f1" (UID: "a376077f-3fb9-4af0-b925-8c2e06aaa5f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.961229 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.961268 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.961278 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dnfz\" (UniqueName: \"kubernetes.io/projected/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-kube-api-access-7dnfz\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.961290 4687 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-config\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.961300 4687 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.961308 4687 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:26 crc kubenswrapper[4687]: I1203 18:03:26.961316 4687 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a376077f-3fb9-4af0-b925-8c2e06aaa5f1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.231110 4687 generic.go:334] "Generic (PLEG): container finished" podID="a376077f-3fb9-4af0-b925-8c2e06aaa5f1" containerID="ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec" exitCode=0 Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.231176 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.231197 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" event={"ID":"a376077f-3fb9-4af0-b925-8c2e06aaa5f1","Type":"ContainerDied","Data":"ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec"} Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.231234 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-2hbx9" event={"ID":"a376077f-3fb9-4af0-b925-8c2e06aaa5f1","Type":"ContainerDied","Data":"09ec2b2fabe2d0d3d5e9c783ed4366bf0f3743632aa2467b95cdd5ae07cd30fd"} Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.231268 4687 scope.go:117] "RemoveContainer" containerID="ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec" Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.265829 4687 scope.go:117] "RemoveContainer" containerID="9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0" Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.269186 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-2hbx9"] Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.282016 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-2hbx9"] Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.293947 4687 scope.go:117] "RemoveContainer" containerID="ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec" Dec 03 18:03:27 crc kubenswrapper[4687]: E1203 18:03:27.294626 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec\": container with ID starting with ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec not found: ID does not exist" 
containerID="ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec" Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.294664 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec"} err="failed to get container status \"ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec\": rpc error: code = NotFound desc = could not find container \"ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec\": container with ID starting with ca2292bd379a9d280b2731f8e0bfdde83cc978b685793a4718345e72a5ad2aec not found: ID does not exist" Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.294705 4687 scope.go:117] "RemoveContainer" containerID="9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0" Dec 03 18:03:27 crc kubenswrapper[4687]: E1203 18:03:27.295187 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0\": container with ID starting with 9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0 not found: ID does not exist" containerID="9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0" Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.295233 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0"} err="failed to get container status \"9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0\": rpc error: code = NotFound desc = could not find container \"9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0\": container with ID starting with 9e230010997d1878893e0f9d6082a41d378ece2c73496a8f1f5fdf7dbe5a98f0 not found: ID does not exist" Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.420318 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ab922dd-3caa-4df0-9f18-347140283827" path="/var/lib/kubelet/pods/8ab922dd-3caa-4df0-9f18-347140283827/volumes" Dec 03 18:03:27 crc kubenswrapper[4687]: I1203 18:03:27.422562 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a376077f-3fb9-4af0-b925-8c2e06aaa5f1" path="/var/lib/kubelet/pods/a376077f-3fb9-4af0-b925-8c2e06aaa5f1/volumes" Dec 03 18:03:38 crc kubenswrapper[4687]: I1203 18:03:38.367668 4687 generic.go:334] "Generic (PLEG): container finished" podID="bef36ed8-b2b0-465c-9719-c9ff963dcd2f" containerID="d944efcd63b0452655c8ecb02feedc4c97dd7179be46fe9d9cf6b9e9ec567785" exitCode=0 Dec 03 18:03:38 crc kubenswrapper[4687]: I1203 18:03:38.367905 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bef36ed8-b2b0-465c-9719-c9ff963dcd2f","Type":"ContainerDied","Data":"d944efcd63b0452655c8ecb02feedc4c97dd7179be46fe9d9cf6b9e9ec567785"} Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.309581 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8"] Dec 03 18:03:39 crc kubenswrapper[4687]: E1203 18:03:39.310345 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a376077f-3fb9-4af0-b925-8c2e06aaa5f1" containerName="init" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310362 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a376077f-3fb9-4af0-b925-8c2e06aaa5f1" containerName="init" Dec 03 18:03:39 crc kubenswrapper[4687]: E1203 18:03:39.310376 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a376077f-3fb9-4af0-b925-8c2e06aaa5f1" containerName="dnsmasq-dns" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310383 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a376077f-3fb9-4af0-b925-8c2e06aaa5f1" containerName="dnsmasq-dns" Dec 03 18:03:39 crc kubenswrapper[4687]: E1203 
18:03:39.310393 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" containerName="dnsmasq-dns" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310399 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" containerName="dnsmasq-dns" Dec 03 18:03:39 crc kubenswrapper[4687]: E1203 18:03:39.310414 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" containerName="init" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310419 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" containerName="init" Dec 03 18:03:39 crc kubenswrapper[4687]: E1203 18:03:39.310432 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab922dd-3caa-4df0-9f18-347140283827" containerName="extract-utilities" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310439 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab922dd-3caa-4df0-9f18-347140283827" containerName="extract-utilities" Dec 03 18:03:39 crc kubenswrapper[4687]: E1203 18:03:39.310451 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab922dd-3caa-4df0-9f18-347140283827" containerName="registry-server" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310457 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab922dd-3caa-4df0-9f18-347140283827" containerName="registry-server" Dec 03 18:03:39 crc kubenswrapper[4687]: E1203 18:03:39.310472 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab922dd-3caa-4df0-9f18-347140283827" containerName="extract-content" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310478 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab922dd-3caa-4df0-9f18-347140283827" containerName="extract-content" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310663 4687 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3bb7f8f0-2702-4b86-be5d-b7f2957e08ba" containerName="dnsmasq-dns" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310677 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab922dd-3caa-4df0-9f18-347140283827" containerName="registry-server" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.310688 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a376077f-3fb9-4af0-b925-8c2e06aaa5f1" containerName="dnsmasq-dns" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.311254 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.316164 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.316245 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.316512 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.318453 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.330005 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8"] Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.379786 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bef36ed8-b2b0-465c-9719-c9ff963dcd2f","Type":"ContainerStarted","Data":"8d23b792cf131b06976f8133450fdc19f2f9010022a44378310a72905a7ae08e"} Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 
18:03:39.380067 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.402021 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.402002396 podStartE2EDuration="36.402002396s" podCreationTimestamp="2025-12-03 18:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:03:39.399106098 +0000 UTC m=+1452.289801531" watchObservedRunningTime="2025-12-03 18:03:39.402002396 +0000 UTC m=+1452.292697829" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.493515 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.493712 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8chf\" (UniqueName: \"kubernetes.io/projected/416ff6ab-b4d6-451c-8219-1db28ce18f92-kube-api-access-l8chf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.493775 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: 
\"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.493835 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.595875 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8chf\" (UniqueName: \"kubernetes.io/projected/416ff6ab-b4d6-451c-8219-1db28ce18f92-kube-api-access-l8chf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.595977 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.596048 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.596199 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.600918 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.601354 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.610559 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.625930 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8chf\" (UniqueName: \"kubernetes.io/projected/416ff6ab-b4d6-451c-8219-1db28ce18f92-kube-api-access-l8chf\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:39 crc kubenswrapper[4687]: I1203 18:03:39.632681 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:03:40 crc kubenswrapper[4687]: I1203 18:03:40.163759 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8"] Dec 03 18:03:40 crc kubenswrapper[4687]: W1203 18:03:40.168292 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod416ff6ab_b4d6_451c_8219_1db28ce18f92.slice/crio-2086180fd692573ecdd9e54f8bc6a0d70be2d897de3a6cd2a4258347c2cc4c78 WatchSource:0}: Error finding container 2086180fd692573ecdd9e54f8bc6a0d70be2d897de3a6cd2a4258347c2cc4c78: Status 404 returned error can't find the container with id 2086180fd692573ecdd9e54f8bc6a0d70be2d897de3a6cd2a4258347c2cc4c78 Dec 03 18:03:40 crc kubenswrapper[4687]: I1203 18:03:40.399996 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" event={"ID":"416ff6ab-b4d6-451c-8219-1db28ce18f92","Type":"ContainerStarted","Data":"2086180fd692573ecdd9e54f8bc6a0d70be2d897de3a6cd2a4258347c2cc4c78"} Dec 03 18:03:40 crc kubenswrapper[4687]: I1203 18:03:40.402915 4687 generic.go:334] "Generic (PLEG): container finished" podID="b31a63e3-b46e-403c-b1b4-3acd833f453f" containerID="fe1497c370b5fa9c2635601dd99e9555e46d1e8ffe9dbf37c48b117741f0aed5" exitCode=0 Dec 03 18:03:40 crc kubenswrapper[4687]: I1203 18:03:40.404975 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b31a63e3-b46e-403c-b1b4-3acd833f453f","Type":"ContainerDied","Data":"fe1497c370b5fa9c2635601dd99e9555e46d1e8ffe9dbf37c48b117741f0aed5"} Dec 03 18:03:41 crc kubenswrapper[4687]: I1203 18:03:41.426266 4687 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b31a63e3-b46e-403c-b1b4-3acd833f453f","Type":"ContainerStarted","Data":"5a6386a07a7da83d4632a3e4c4472a6a98736dc9bd7fa24fe6d3f5d28df2e0ee"} Dec 03 18:03:41 crc kubenswrapper[4687]: I1203 18:03:41.426786 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:03:41 crc kubenswrapper[4687]: I1203 18:03:41.454498 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.454479555 podStartE2EDuration="38.454479555s" podCreationTimestamp="2025-12-03 18:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:03:41.450451596 +0000 UTC m=+1454.341147049" watchObservedRunningTime="2025-12-03 18:03:41.454479555 +0000 UTC m=+1454.345174988" Dec 03 18:03:44 crc kubenswrapper[4687]: I1203 18:03:44.111509 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:03:44 crc kubenswrapper[4687]: I1203 18:03:44.111886 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:03:50 crc kubenswrapper[4687]: I1203 18:03:50.530216 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" 
event={"ID":"416ff6ab-b4d6-451c-8219-1db28ce18f92","Type":"ContainerStarted","Data":"c069eda9915d1d5def1d94b5318c9bce565b1db545a284f2868030eb2aab3159"} Dec 03 18:03:50 crc kubenswrapper[4687]: I1203 18:03:50.546726 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" podStartSLOduration=1.99852394 podStartE2EDuration="11.546700298s" podCreationTimestamp="2025-12-03 18:03:39 +0000 UTC" firstStartedPulling="2025-12-03 18:03:40.170996966 +0000 UTC m=+1453.061692399" lastFinishedPulling="2025-12-03 18:03:49.719173324 +0000 UTC m=+1462.609868757" observedRunningTime="2025-12-03 18:03:50.544287273 +0000 UTC m=+1463.434982706" watchObservedRunningTime="2025-12-03 18:03:50.546700298 +0000 UTC m=+1463.437395751" Dec 03 18:03:53 crc kubenswrapper[4687]: I1203 18:03:53.600342 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 03 18:03:54 crc kubenswrapper[4687]: I1203 18:03:54.300349 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 03 18:04:03 crc kubenswrapper[4687]: I1203 18:04:03.650450 4687 generic.go:334] "Generic (PLEG): container finished" podID="416ff6ab-b4d6-451c-8219-1db28ce18f92" containerID="c069eda9915d1d5def1d94b5318c9bce565b1db545a284f2868030eb2aab3159" exitCode=0 Dec 03 18:04:03 crc kubenswrapper[4687]: I1203 18:04:03.650535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" event={"ID":"416ff6ab-b4d6-451c-8219-1db28ce18f92","Type":"ContainerDied","Data":"c069eda9915d1d5def1d94b5318c9bce565b1db545a284f2868030eb2aab3159"} Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.102355 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.208318 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-inventory\") pod \"416ff6ab-b4d6-451c-8219-1db28ce18f92\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.208376 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-ssh-key\") pod \"416ff6ab-b4d6-451c-8219-1db28ce18f92\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.208447 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8chf\" (UniqueName: \"kubernetes.io/projected/416ff6ab-b4d6-451c-8219-1db28ce18f92-kube-api-access-l8chf\") pod \"416ff6ab-b4d6-451c-8219-1db28ce18f92\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.208614 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-repo-setup-combined-ca-bundle\") pod \"416ff6ab-b4d6-451c-8219-1db28ce18f92\" (UID: \"416ff6ab-b4d6-451c-8219-1db28ce18f92\") " Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.215771 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "416ff6ab-b4d6-451c-8219-1db28ce18f92" (UID: "416ff6ab-b4d6-451c-8219-1db28ce18f92"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.215938 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416ff6ab-b4d6-451c-8219-1db28ce18f92-kube-api-access-l8chf" (OuterVolumeSpecName: "kube-api-access-l8chf") pod "416ff6ab-b4d6-451c-8219-1db28ce18f92" (UID: "416ff6ab-b4d6-451c-8219-1db28ce18f92"). InnerVolumeSpecName "kube-api-access-l8chf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.249420 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "416ff6ab-b4d6-451c-8219-1db28ce18f92" (UID: "416ff6ab-b4d6-451c-8219-1db28ce18f92"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.255821 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-inventory" (OuterVolumeSpecName: "inventory") pod "416ff6ab-b4d6-451c-8219-1db28ce18f92" (UID: "416ff6ab-b4d6-451c-8219-1db28ce18f92"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.310700 4687 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.310767 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.310777 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/416ff6ab-b4d6-451c-8219-1db28ce18f92-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.310785 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8chf\" (UniqueName: \"kubernetes.io/projected/416ff6ab-b4d6-451c-8219-1db28ce18f92-kube-api-access-l8chf\") on node \"crc\" DevicePath \"\"" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.685043 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" event={"ID":"416ff6ab-b4d6-451c-8219-1db28ce18f92","Type":"ContainerDied","Data":"2086180fd692573ecdd9e54f8bc6a0d70be2d897de3a6cd2a4258347c2cc4c78"} Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.685108 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2086180fd692573ecdd9e54f8bc6a0d70be2d897de3a6cd2a4258347c2cc4c78" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.685226 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.834754 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2"] Dec 03 18:04:05 crc kubenswrapper[4687]: E1203 18:04:05.837202 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416ff6ab-b4d6-451c-8219-1db28ce18f92" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.837253 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="416ff6ab-b4d6-451c-8219-1db28ce18f92" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.838035 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="416ff6ab-b4d6-451c-8219-1db28ce18f92" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.839623 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.842439 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.843315 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.843534 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.843814 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.873388 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2"] Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.927981 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txwj\" (UniqueName: \"kubernetes.io/projected/788d4c10-cc61-4086-8e29-6dcdf6592f4a-kube-api-access-5txwj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lrxk2\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.928446 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lrxk2\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:05 crc kubenswrapper[4687]: I1203 18:04:05.928583 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lrxk2\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:06 crc kubenswrapper[4687]: I1203 18:04:06.031012 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lrxk2\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:06 crc kubenswrapper[4687]: I1203 18:04:06.031096 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txwj\" (UniqueName: \"kubernetes.io/projected/788d4c10-cc61-4086-8e29-6dcdf6592f4a-kube-api-access-5txwj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lrxk2\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:06 crc kubenswrapper[4687]: I1203 18:04:06.031171 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lrxk2\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:06 crc kubenswrapper[4687]: I1203 18:04:06.036020 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lrxk2\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:06 crc kubenswrapper[4687]: I1203 18:04:06.037840 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lrxk2\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:06 crc kubenswrapper[4687]: I1203 18:04:06.060617 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txwj\" (UniqueName: \"kubernetes.io/projected/788d4c10-cc61-4086-8e29-6dcdf6592f4a-kube-api-access-5txwj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-lrxk2\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:06 crc kubenswrapper[4687]: I1203 18:04:06.162989 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:06 crc kubenswrapper[4687]: I1203 18:04:06.713119 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2"] Dec 03 18:04:07 crc kubenswrapper[4687]: I1203 18:04:07.709789 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" event={"ID":"788d4c10-cc61-4086-8e29-6dcdf6592f4a","Type":"ContainerStarted","Data":"1f8f19fe882b0f38b3317487a9e653f421be58e3917da6db74f27dba885426d2"} Dec 03 18:04:07 crc kubenswrapper[4687]: I1203 18:04:07.710160 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" event={"ID":"788d4c10-cc61-4086-8e29-6dcdf6592f4a","Type":"ContainerStarted","Data":"b1bb115f1788a3630a2b552b66dc829496f618afdb41e485fe0fee435986b2af"} Dec 03 18:04:07 crc kubenswrapper[4687]: I1203 18:04:07.731671 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" podStartSLOduration=2.26648695 podStartE2EDuration="2.731647794s" podCreationTimestamp="2025-12-03 18:04:05 +0000 UTC" firstStartedPulling="2025-12-03 18:04:06.719503741 +0000 UTC m=+1479.610199174" lastFinishedPulling="2025-12-03 18:04:07.184664585 +0000 UTC m=+1480.075360018" observedRunningTime="2025-12-03 18:04:07.723492033 +0000 UTC m=+1480.614187466" watchObservedRunningTime="2025-12-03 18:04:07.731647794 +0000 UTC m=+1480.622343227" Dec 03 18:04:10 crc kubenswrapper[4687]: I1203 18:04:10.744876 4687 generic.go:334] "Generic (PLEG): container finished" podID="788d4c10-cc61-4086-8e29-6dcdf6592f4a" containerID="1f8f19fe882b0f38b3317487a9e653f421be58e3917da6db74f27dba885426d2" exitCode=0 Dec 03 18:04:10 crc kubenswrapper[4687]: I1203 18:04:10.744951 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" event={"ID":"788d4c10-cc61-4086-8e29-6dcdf6592f4a","Type":"ContainerDied","Data":"1f8f19fe882b0f38b3317487a9e653f421be58e3917da6db74f27dba885426d2"} Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.304427 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.370951 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-ssh-key\") pod \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.371253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5txwj\" (UniqueName: \"kubernetes.io/projected/788d4c10-cc61-4086-8e29-6dcdf6592f4a-kube-api-access-5txwj\") pod \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.371573 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-inventory\") pod \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\" (UID: \"788d4c10-cc61-4086-8e29-6dcdf6592f4a\") " Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.390907 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788d4c10-cc61-4086-8e29-6dcdf6592f4a-kube-api-access-5txwj" (OuterVolumeSpecName: "kube-api-access-5txwj") pod "788d4c10-cc61-4086-8e29-6dcdf6592f4a" (UID: "788d4c10-cc61-4086-8e29-6dcdf6592f4a"). InnerVolumeSpecName "kube-api-access-5txwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.402280 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-inventory" (OuterVolumeSpecName: "inventory") pod "788d4c10-cc61-4086-8e29-6dcdf6592f4a" (UID: "788d4c10-cc61-4086-8e29-6dcdf6592f4a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.417387 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "788d4c10-cc61-4086-8e29-6dcdf6592f4a" (UID: "788d4c10-cc61-4086-8e29-6dcdf6592f4a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.474481 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.474522 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/788d4c10-cc61-4086-8e29-6dcdf6592f4a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.474535 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5txwj\" (UniqueName: \"kubernetes.io/projected/788d4c10-cc61-4086-8e29-6dcdf6592f4a-kube-api-access-5txwj\") on node \"crc\" DevicePath \"\"" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.774699 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" 
event={"ID":"788d4c10-cc61-4086-8e29-6dcdf6592f4a","Type":"ContainerDied","Data":"b1bb115f1788a3630a2b552b66dc829496f618afdb41e485fe0fee435986b2af"} Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.774750 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1bb115f1788a3630a2b552b66dc829496f618afdb41e485fe0fee435986b2af" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.774801 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-lrxk2" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.856864 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf"] Dec 03 18:04:12 crc kubenswrapper[4687]: E1203 18:04:12.859318 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788d4c10-cc61-4086-8e29-6dcdf6592f4a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.859704 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="788d4c10-cc61-4086-8e29-6dcdf6592f4a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.861671 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="788d4c10-cc61-4086-8e29-6dcdf6592f4a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.863353 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.867148 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.867243 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.867401 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.867507 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.887765 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.887847 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.887967 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xjd\" (UniqueName: 
\"kubernetes.io/projected/6dcace96-ba84-4176-9fa0-216e86ae113b-kube-api-access-z8xjd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.887998 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.889582 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf"] Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.989569 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.989684 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8xjd\" (UniqueName: \"kubernetes.io/projected/6dcace96-ba84-4176-9fa0-216e86ae113b-kube-api-access-z8xjd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.989716 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:12 crc kubenswrapper[4687]: I1203 18:04:12.989778 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:13 crc kubenswrapper[4687]: I1203 18:04:12.996300 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:13 crc kubenswrapper[4687]: I1203 18:04:12.996551 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:13 crc kubenswrapper[4687]: I1203 18:04:12.996536 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:13 crc kubenswrapper[4687]: I1203 
18:04:13.021871 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8xjd\" (UniqueName: \"kubernetes.io/projected/6dcace96-ba84-4176-9fa0-216e86ae113b-kube-api-access-z8xjd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:13 crc kubenswrapper[4687]: I1203 18:04:13.188991 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:04:13 crc kubenswrapper[4687]: I1203 18:04:13.837245 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf"] Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.111909 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.111977 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.112023 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.112760 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.112816 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" gracePeriod=600 Dec 03 18:04:14 crc kubenswrapper[4687]: E1203 18:04:14.242110 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.795916 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" exitCode=0 Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.795974 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9"} Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.796337 4687 scope.go:117] "RemoveContainer" containerID="db902a5bffdbf33c8da58cdee4ed48423a21c1c42eeecaaf4efe21343a963605" Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.797362 4687 
scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:04:14 crc kubenswrapper[4687]: E1203 18:04:14.797827 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.799632 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" event={"ID":"6dcace96-ba84-4176-9fa0-216e86ae113b","Type":"ContainerStarted","Data":"546ebd9bbf7e3c53c8075c0d962d3661b5800159b693a26542c4b72cf2770f58"} Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.799697 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" event={"ID":"6dcace96-ba84-4176-9fa0-216e86ae113b","Type":"ContainerStarted","Data":"c18debc623744ca4de66b52df7f5f89b8a2f8e4b9b4d2eec6afdd775a9b49583"} Dec 03 18:04:14 crc kubenswrapper[4687]: I1203 18:04:14.846777 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" podStartSLOduration=2.317277873 podStartE2EDuration="2.846756239s" podCreationTimestamp="2025-12-03 18:04:12 +0000 UTC" firstStartedPulling="2025-12-03 18:04:13.830210187 +0000 UTC m=+1486.720905630" lastFinishedPulling="2025-12-03 18:04:14.359688563 +0000 UTC m=+1487.250383996" observedRunningTime="2025-12-03 18:04:14.839286027 +0000 UTC m=+1487.729981460" watchObservedRunningTime="2025-12-03 18:04:14.846756239 +0000 UTC m=+1487.737451682" Dec 03 18:04:28 crc kubenswrapper[4687]: I1203 18:04:28.410967 4687 
scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:04:28 crc kubenswrapper[4687]: E1203 18:04:28.412448 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:04:38 crc kubenswrapper[4687]: I1203 18:04:38.592778 4687 scope.go:117] "RemoveContainer" containerID="bde4df2cb5523fa2ccdbeb36fb7889e5cc3ec53ddbd8aa5cafb52fd5e9c002e8" Dec 03 18:04:39 crc kubenswrapper[4687]: I1203 18:04:39.408179 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:04:39 crc kubenswrapper[4687]: E1203 18:04:39.409203 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:04:50 crc kubenswrapper[4687]: I1203 18:04:50.407828 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:04:50 crc kubenswrapper[4687]: E1203 18:04:50.409097 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:05:02 crc kubenswrapper[4687]: I1203 18:05:02.407205 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:05:02 crc kubenswrapper[4687]: E1203 18:05:02.408044 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:05:15 crc kubenswrapper[4687]: I1203 18:05:15.407621 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:05:15 crc kubenswrapper[4687]: E1203 18:05:15.408707 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:05:28 crc kubenswrapper[4687]: I1203 18:05:28.407827 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:05:28 crc kubenswrapper[4687]: E1203 18:05:28.408503 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:05:38 crc kubenswrapper[4687]: I1203 18:05:38.692718 4687 scope.go:117] "RemoveContainer" containerID="1249deec1eb1764e6a6e1535920ec9a99f9cb675e451ca77274c35242f90287a" Dec 03 18:05:38 crc kubenswrapper[4687]: I1203 18:05:38.734923 4687 scope.go:117] "RemoveContainer" containerID="2500bb0115d90671aac1ea144a4b3a848d70e4aa19b5292498a410bfdb36ae26" Dec 03 18:05:42 crc kubenswrapper[4687]: I1203 18:05:42.407587 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:05:42 crc kubenswrapper[4687]: E1203 18:05:42.408089 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:05:54 crc kubenswrapper[4687]: I1203 18:05:54.407394 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:05:54 crc kubenswrapper[4687]: E1203 18:05:54.408267 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:06:08 crc kubenswrapper[4687]: 
I1203 18:06:08.407955 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:06:08 crc kubenswrapper[4687]: E1203 18:06:08.408925 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:06:21 crc kubenswrapper[4687]: I1203 18:06:21.408087 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:06:21 crc kubenswrapper[4687]: E1203 18:06:21.408835 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.576365 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crbjm"] Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.578997 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.593937 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crbjm"] Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.630236 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6xg\" (UniqueName: \"kubernetes.io/projected/891f8f3f-caf9-49f2-bbe0-1c53f9583215-kube-api-access-dt6xg\") pod \"certified-operators-crbjm\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.630631 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-catalog-content\") pod \"certified-operators-crbjm\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.630680 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-utilities\") pod \"certified-operators-crbjm\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.732402 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-catalog-content\") pod \"certified-operators-crbjm\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.732469 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-utilities\") pod \"certified-operators-crbjm\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.732537 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6xg\" (UniqueName: \"kubernetes.io/projected/891f8f3f-caf9-49f2-bbe0-1c53f9583215-kube-api-access-dt6xg\") pod \"certified-operators-crbjm\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.733020 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-catalog-content\") pod \"certified-operators-crbjm\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.733138 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-utilities\") pod \"certified-operators-crbjm\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.752989 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6xg\" (UniqueName: \"kubernetes.io/projected/891f8f3f-caf9-49f2-bbe0-1c53f9583215-kube-api-access-dt6xg\") pod \"certified-operators-crbjm\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:25 crc kubenswrapper[4687]: I1203 18:06:25.943520 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:26 crc kubenswrapper[4687]: W1203 18:06:26.428526 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod891f8f3f_caf9_49f2_bbe0_1c53f9583215.slice/crio-9b69ec66d6480e8410b4aac712a2d21100229a407acaec14d1aedb3a3a7355f5 WatchSource:0}: Error finding container 9b69ec66d6480e8410b4aac712a2d21100229a407acaec14d1aedb3a3a7355f5: Status 404 returned error can't find the container with id 9b69ec66d6480e8410b4aac712a2d21100229a407acaec14d1aedb3a3a7355f5 Dec 03 18:06:26 crc kubenswrapper[4687]: I1203 18:06:26.433688 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crbjm"] Dec 03 18:06:27 crc kubenswrapper[4687]: I1203 18:06:27.183234 4687 generic.go:334] "Generic (PLEG): container finished" podID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerID="209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df" exitCode=0 Dec 03 18:06:27 crc kubenswrapper[4687]: I1203 18:06:27.183421 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crbjm" event={"ID":"891f8f3f-caf9-49f2-bbe0-1c53f9583215","Type":"ContainerDied","Data":"209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df"} Dec 03 18:06:27 crc kubenswrapper[4687]: I1203 18:06:27.183515 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crbjm" event={"ID":"891f8f3f-caf9-49f2-bbe0-1c53f9583215","Type":"ContainerStarted","Data":"9b69ec66d6480e8410b4aac712a2d21100229a407acaec14d1aedb3a3a7355f5"} Dec 03 18:06:29 crc kubenswrapper[4687]: I1203 18:06:29.203023 4687 generic.go:334] "Generic (PLEG): container finished" podID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerID="13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2" exitCode=0 Dec 03 18:06:29 crc kubenswrapper[4687]: I1203 
18:06:29.203107 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crbjm" event={"ID":"891f8f3f-caf9-49f2-bbe0-1c53f9583215","Type":"ContainerDied","Data":"13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2"} Dec 03 18:06:30 crc kubenswrapper[4687]: I1203 18:06:30.215738 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crbjm" event={"ID":"891f8f3f-caf9-49f2-bbe0-1c53f9583215","Type":"ContainerStarted","Data":"836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8"} Dec 03 18:06:30 crc kubenswrapper[4687]: I1203 18:06:30.238069 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-crbjm" podStartSLOduration=2.499116673 podStartE2EDuration="5.238045257s" podCreationTimestamp="2025-12-03 18:06:25 +0000 UTC" firstStartedPulling="2025-12-03 18:06:27.186205841 +0000 UTC m=+1620.076901274" lastFinishedPulling="2025-12-03 18:06:29.925134425 +0000 UTC m=+1622.815829858" observedRunningTime="2025-12-03 18:06:30.232160617 +0000 UTC m=+1623.122856040" watchObservedRunningTime="2025-12-03 18:06:30.238045257 +0000 UTC m=+1623.128740690" Dec 03 18:06:34 crc kubenswrapper[4687]: I1203 18:06:34.407068 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:06:34 crc kubenswrapper[4687]: E1203 18:06:34.407916 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:06:35 crc kubenswrapper[4687]: I1203 18:06:35.944815 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:35 crc kubenswrapper[4687]: I1203 18:06:35.945213 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:36 crc kubenswrapper[4687]: I1203 18:06:36.001358 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:36 crc kubenswrapper[4687]: I1203 18:06:36.318842 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:36 crc kubenswrapper[4687]: I1203 18:06:36.376046 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crbjm"] Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.285585 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-crbjm" podUID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerName="registry-server" containerID="cri-o://836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8" gracePeriod=2 Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.743199 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.849012 4687 scope.go:117] "RemoveContainer" containerID="a5aa894c9bbdccd70848e601c4e6cae124fa13fa1f369dd6f51e728d93bb70d0" Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.852714 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-utilities\") pod \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.852803 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-catalog-content\") pod \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.853001 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt6xg\" (UniqueName: \"kubernetes.io/projected/891f8f3f-caf9-49f2-bbe0-1c53f9583215-kube-api-access-dt6xg\") pod \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\" (UID: \"891f8f3f-caf9-49f2-bbe0-1c53f9583215\") " Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.853494 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-utilities" (OuterVolumeSpecName: "utilities") pod "891f8f3f-caf9-49f2-bbe0-1c53f9583215" (UID: "891f8f3f-caf9-49f2-bbe0-1c53f9583215"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.853581 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.858174 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891f8f3f-caf9-49f2-bbe0-1c53f9583215-kube-api-access-dt6xg" (OuterVolumeSpecName: "kube-api-access-dt6xg") pod "891f8f3f-caf9-49f2-bbe0-1c53f9583215" (UID: "891f8f3f-caf9-49f2-bbe0-1c53f9583215"). InnerVolumeSpecName "kube-api-access-dt6xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.877602 4687 scope.go:117] "RemoveContainer" containerID="b6af99c6de502a951fa5bd0b921b8ce45bee92dc0204bb6dcfbc3f1e775bdb1e" Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.900267 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "891f8f3f-caf9-49f2-bbe0-1c53f9583215" (UID: "891f8f3f-caf9-49f2-bbe0-1c53f9583215"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.955314 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt6xg\" (UniqueName: \"kubernetes.io/projected/891f8f3f-caf9-49f2-bbe0-1c53f9583215-kube-api-access-dt6xg\") on node \"crc\" DevicePath \"\"" Dec 03 18:06:38 crc kubenswrapper[4687]: I1203 18:06:38.955367 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/891f8f3f-caf9-49f2-bbe0-1c53f9583215-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.302034 4687 generic.go:334] "Generic (PLEG): container finished" podID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerID="836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8" exitCode=0 Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.302094 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crbjm" event={"ID":"891f8f3f-caf9-49f2-bbe0-1c53f9583215","Type":"ContainerDied","Data":"836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8"} Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.302192 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crbjm" event={"ID":"891f8f3f-caf9-49f2-bbe0-1c53f9583215","Type":"ContainerDied","Data":"9b69ec66d6480e8410b4aac712a2d21100229a407acaec14d1aedb3a3a7355f5"} Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.302233 4687 scope.go:117] "RemoveContainer" containerID="836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.302240 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crbjm" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.322951 4687 scope.go:117] "RemoveContainer" containerID="13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.349212 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crbjm"] Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.358846 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-crbjm"] Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.379753 4687 scope.go:117] "RemoveContainer" containerID="209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.440584 4687 scope.go:117] "RemoveContainer" containerID="836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8" Dec 03 18:06:39 crc kubenswrapper[4687]: E1203 18:06:39.441894 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8\": container with ID starting with 836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8 not found: ID does not exist" containerID="836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.442168 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8"} err="failed to get container status \"836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8\": rpc error: code = NotFound desc = could not find container \"836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8\": container with ID starting with 836617cecde4e34c254dc01b1bf9f4e181739cef1d07b444780552f5c0b054e8 not 
found: ID does not exist" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.442294 4687 scope.go:117] "RemoveContainer" containerID="13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.443135 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" path="/var/lib/kubelet/pods/891f8f3f-caf9-49f2-bbe0-1c53f9583215/volumes" Dec 03 18:06:39 crc kubenswrapper[4687]: E1203 18:06:39.443863 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2\": container with ID starting with 13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2 not found: ID does not exist" containerID="13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.443926 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2"} err="failed to get container status \"13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2\": rpc error: code = NotFound desc = could not find container \"13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2\": container with ID starting with 13c2ed12eaf41034206a7d86f0080569b07e82f9c20334b416be081700a789f2 not found: ID does not exist" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.443979 4687 scope.go:117] "RemoveContainer" containerID="209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df" Dec 03 18:06:39 crc kubenswrapper[4687]: E1203 18:06:39.444854 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df\": container with ID starting with 
209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df not found: ID does not exist" containerID="209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df" Dec 03 18:06:39 crc kubenswrapper[4687]: I1203 18:06:39.444888 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df"} err="failed to get container status \"209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df\": rpc error: code = NotFound desc = could not find container \"209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df\": container with ID starting with 209f34c7064da501eb0cb0d47d58c43a2b306aa7300d5483d21555cc7da318df not found: ID does not exist" Dec 03 18:06:45 crc kubenswrapper[4687]: I1203 18:06:45.407557 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:06:45 crc kubenswrapper[4687]: E1203 18:06:45.408406 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:06:58 crc kubenswrapper[4687]: I1203 18:06:58.407397 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:06:58 crc kubenswrapper[4687]: E1203 18:06:58.408328 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:07:09 crc kubenswrapper[4687]: I1203 18:07:09.408834 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:07:09 crc kubenswrapper[4687]: E1203 18:07:09.409589 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:07:13 crc kubenswrapper[4687]: I1203 18:07:13.794663 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ttr5b"] Dec 03 18:07:13 crc kubenswrapper[4687]: E1203 18:07:13.795986 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerName="extract-utilities" Dec 03 18:07:13 crc kubenswrapper[4687]: I1203 18:07:13.796009 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerName="extract-utilities" Dec 03 18:07:13 crc kubenswrapper[4687]: E1203 18:07:13.796063 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerName="registry-server" Dec 03 18:07:13 crc kubenswrapper[4687]: I1203 18:07:13.796071 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerName="registry-server" Dec 03 18:07:13 crc kubenswrapper[4687]: E1203 18:07:13.796093 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerName="extract-content" Dec 03 18:07:13 crc kubenswrapper[4687]: 
I1203 18:07:13.796306 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerName="extract-content" Dec 03 18:07:13 crc kubenswrapper[4687]: I1203 18:07:13.799827 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="891f8f3f-caf9-49f2-bbe0-1c53f9583215" containerName="registry-server" Dec 03 18:07:13 crc kubenswrapper[4687]: I1203 18:07:13.801436 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:13 crc kubenswrapper[4687]: I1203 18:07:13.815604 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttr5b"] Dec 03 18:07:13 crc kubenswrapper[4687]: I1203 18:07:13.953850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-catalog-content\") pod \"redhat-marketplace-ttr5b\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:13 crc kubenswrapper[4687]: I1203 18:07:13.954018 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcph5\" (UniqueName: \"kubernetes.io/projected/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-kube-api-access-fcph5\") pod \"redhat-marketplace-ttr5b\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:13 crc kubenswrapper[4687]: I1203 18:07:13.954109 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-utilities\") pod \"redhat-marketplace-ttr5b\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:14 crc 
kubenswrapper[4687]: I1203 18:07:14.055557 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-catalog-content\") pod \"redhat-marketplace-ttr5b\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:14 crc kubenswrapper[4687]: I1203 18:07:14.055640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcph5\" (UniqueName: \"kubernetes.io/projected/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-kube-api-access-fcph5\") pod \"redhat-marketplace-ttr5b\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:14 crc kubenswrapper[4687]: I1203 18:07:14.055686 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-utilities\") pod \"redhat-marketplace-ttr5b\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:14 crc kubenswrapper[4687]: I1203 18:07:14.056089 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-catalog-content\") pod \"redhat-marketplace-ttr5b\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:14 crc kubenswrapper[4687]: I1203 18:07:14.056162 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-utilities\") pod \"redhat-marketplace-ttr5b\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:14 crc kubenswrapper[4687]: I1203 18:07:14.083302 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcph5\" (UniqueName: \"kubernetes.io/projected/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-kube-api-access-fcph5\") pod \"redhat-marketplace-ttr5b\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:14 crc kubenswrapper[4687]: I1203 18:07:14.127894 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:14 crc kubenswrapper[4687]: I1203 18:07:14.598001 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttr5b"] Dec 03 18:07:14 crc kubenswrapper[4687]: I1203 18:07:14.660223 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttr5b" event={"ID":"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb","Type":"ContainerStarted","Data":"15bb9ed6145abcfbfab88b35fd588002b4f61e51ec65603f74b0dd664d35682a"} Dec 03 18:07:15 crc kubenswrapper[4687]: I1203 18:07:15.671419 4687 generic.go:334] "Generic (PLEG): container finished" podID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerID="2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5" exitCode=0 Dec 03 18:07:15 crc kubenswrapper[4687]: I1203 18:07:15.671646 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttr5b" event={"ID":"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb","Type":"ContainerDied","Data":"2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5"} Dec 03 18:07:17 crc kubenswrapper[4687]: I1203 18:07:17.693374 4687 generic.go:334] "Generic (PLEG): container finished" podID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerID="df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141" exitCode=0 Dec 03 18:07:17 crc kubenswrapper[4687]: I1203 18:07:17.693435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-ttr5b" event={"ID":"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb","Type":"ContainerDied","Data":"df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141"} Dec 03 18:07:18 crc kubenswrapper[4687]: I1203 18:07:18.705355 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttr5b" event={"ID":"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb","Type":"ContainerStarted","Data":"649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5"} Dec 03 18:07:18 crc kubenswrapper[4687]: I1203 18:07:18.731919 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ttr5b" podStartSLOduration=3.306020234 podStartE2EDuration="5.731899532s" podCreationTimestamp="2025-12-03 18:07:13 +0000 UTC" firstStartedPulling="2025-12-03 18:07:15.674212738 +0000 UTC m=+1668.564908171" lastFinishedPulling="2025-12-03 18:07:18.100092026 +0000 UTC m=+1670.990787469" observedRunningTime="2025-12-03 18:07:18.723724901 +0000 UTC m=+1671.614420334" watchObservedRunningTime="2025-12-03 18:07:18.731899532 +0000 UTC m=+1671.622594965" Dec 03 18:07:20 crc kubenswrapper[4687]: I1203 18:07:20.407462 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:07:20 crc kubenswrapper[4687]: E1203 18:07:20.408265 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:07:24 crc kubenswrapper[4687]: I1203 18:07:24.128299 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:24 crc kubenswrapper[4687]: I1203 18:07:24.128693 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:24 crc kubenswrapper[4687]: I1203 18:07:24.196883 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:24 crc kubenswrapper[4687]: I1203 18:07:24.844964 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:24 crc kubenswrapper[4687]: I1203 18:07:24.905962 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttr5b"] Dec 03 18:07:26 crc kubenswrapper[4687]: I1203 18:07:26.798768 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ttr5b" podUID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerName="registry-server" containerID="cri-o://649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5" gracePeriod=2 Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.248908 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.413015 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-catalog-content\") pod \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.413424 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcph5\" (UniqueName: \"kubernetes.io/projected/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-kube-api-access-fcph5\") pod \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.413459 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-utilities\") pod \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\" (UID: \"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb\") " Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.415015 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-utilities" (OuterVolumeSpecName: "utilities") pod "96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" (UID: "96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.419379 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-kube-api-access-fcph5" (OuterVolumeSpecName: "kube-api-access-fcph5") pod "96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" (UID: "96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb"). InnerVolumeSpecName "kube-api-access-fcph5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.431395 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" (UID: "96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.515898 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcph5\" (UniqueName: \"kubernetes.io/projected/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-kube-api-access-fcph5\") on node \"crc\" DevicePath \"\"" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.515934 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.515946 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.810827 4687 generic.go:334] "Generic (PLEG): container finished" podID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerID="649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5" exitCode=0 Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.810874 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttr5b" event={"ID":"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb","Type":"ContainerDied","Data":"649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5"} Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.810905 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-ttr5b" event={"ID":"96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb","Type":"ContainerDied","Data":"15bb9ed6145abcfbfab88b35fd588002b4f61e51ec65603f74b0dd664d35682a"} Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.810936 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttr5b" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.810953 4687 scope.go:117] "RemoveContainer" containerID="649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.837277 4687 scope.go:117] "RemoveContainer" containerID="df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.863946 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttr5b"] Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.873481 4687 scope.go:117] "RemoveContainer" containerID="2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.876094 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttr5b"] Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.924428 4687 scope.go:117] "RemoveContainer" containerID="649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5" Dec 03 18:07:27 crc kubenswrapper[4687]: E1203 18:07:27.924868 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5\": container with ID starting with 649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5 not found: ID does not exist" containerID="649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.924912 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5"} err="failed to get container status \"649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5\": rpc error: code = NotFound desc = could not find container \"649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5\": container with ID starting with 649bdbff098a889a1f6869f2f0f0c8b803ce60c95b3c000321aea46a6e662cb5 not found: ID does not exist" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.924939 4687 scope.go:117] "RemoveContainer" containerID="df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141" Dec 03 18:07:27 crc kubenswrapper[4687]: E1203 18:07:27.925258 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141\": container with ID starting with df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141 not found: ID does not exist" containerID="df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.925305 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141"} err="failed to get container status \"df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141\": rpc error: code = NotFound desc = could not find container \"df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141\": container with ID starting with df132109dfc01d132ffe52542364f061c5d0fcaa673348908583d7fd11a43141 not found: ID does not exist" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.925325 4687 scope.go:117] "RemoveContainer" containerID="2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5" Dec 03 18:07:27 crc kubenswrapper[4687]: E1203 
18:07:27.925611 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5\": container with ID starting with 2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5 not found: ID does not exist" containerID="2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5" Dec 03 18:07:27 crc kubenswrapper[4687]: I1203 18:07:27.925645 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5"} err="failed to get container status \"2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5\": rpc error: code = NotFound desc = could not find container \"2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5\": container with ID starting with 2a7b45882728ef4bd944c557bf8c83867316dc8a3ed5ea7c1ef3eb606f2e86c5 not found: ID does not exist" Dec 03 18:07:29 crc kubenswrapper[4687]: I1203 18:07:29.419507 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" path="/var/lib/kubelet/pods/96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb/volumes" Dec 03 18:07:32 crc kubenswrapper[4687]: I1203 18:07:32.407573 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:07:32 crc kubenswrapper[4687]: E1203 18:07:32.408001 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:07:38 crc kubenswrapper[4687]: I1203 18:07:38.975155 
4687 scope.go:117] "RemoveContainer" containerID="8007580a35839ec827db1fa2ef5a318eca63581ca4205b1d4e2c75d5b3a91650" Dec 03 18:07:38 crc kubenswrapper[4687]: I1203 18:07:38.997930 4687 scope.go:117] "RemoveContainer" containerID="886aee6ab03fc307da8864eba8b01b46dd6030d060501042b506ebab1837d5ea" Dec 03 18:07:39 crc kubenswrapper[4687]: I1203 18:07:39.018576 4687 scope.go:117] "RemoveContainer" containerID="ca9fae0fa75bb007b705c125c9b69e6854c92f603a51767ddc4783554446b264" Dec 03 18:07:39 crc kubenswrapper[4687]: I1203 18:07:39.037341 4687 scope.go:117] "RemoveContainer" containerID="9e99c91874d27ef4815915603e2cee8806d49869070d969f84d43194d4f33e40" Dec 03 18:07:40 crc kubenswrapper[4687]: I1203 18:07:40.946261 4687 generic.go:334] "Generic (PLEG): container finished" podID="6dcace96-ba84-4176-9fa0-216e86ae113b" containerID="546ebd9bbf7e3c53c8075c0d962d3661b5800159b693a26542c4b72cf2770f58" exitCode=0 Dec 03 18:07:40 crc kubenswrapper[4687]: I1203 18:07:40.946358 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" event={"ID":"6dcace96-ba84-4176-9fa0-216e86ae113b","Type":"ContainerDied","Data":"546ebd9bbf7e3c53c8075c0d962d3661b5800159b693a26542c4b72cf2770f58"} Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.471957 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.568251 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-ssh-key\") pod \"6dcace96-ba84-4176-9fa0-216e86ae113b\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.568370 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-bootstrap-combined-ca-bundle\") pod \"6dcace96-ba84-4176-9fa0-216e86ae113b\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.568442 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-inventory\") pod \"6dcace96-ba84-4176-9fa0-216e86ae113b\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.568572 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8xjd\" (UniqueName: \"kubernetes.io/projected/6dcace96-ba84-4176-9fa0-216e86ae113b-kube-api-access-z8xjd\") pod \"6dcace96-ba84-4176-9fa0-216e86ae113b\" (UID: \"6dcace96-ba84-4176-9fa0-216e86ae113b\") " Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.574670 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6dcace96-ba84-4176-9fa0-216e86ae113b" (UID: "6dcace96-ba84-4176-9fa0-216e86ae113b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.586597 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dcace96-ba84-4176-9fa0-216e86ae113b-kube-api-access-z8xjd" (OuterVolumeSpecName: "kube-api-access-z8xjd") pod "6dcace96-ba84-4176-9fa0-216e86ae113b" (UID: "6dcace96-ba84-4176-9fa0-216e86ae113b"). InnerVolumeSpecName "kube-api-access-z8xjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.595944 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-inventory" (OuterVolumeSpecName: "inventory") pod "6dcace96-ba84-4176-9fa0-216e86ae113b" (UID: "6dcace96-ba84-4176-9fa0-216e86ae113b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.606585 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6dcace96-ba84-4176-9fa0-216e86ae113b" (UID: "6dcace96-ba84-4176-9fa0-216e86ae113b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.671080 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.671109 4687 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.671133 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dcace96-ba84-4176-9fa0-216e86ae113b-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.671142 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8xjd\" (UniqueName: \"kubernetes.io/projected/6dcace96-ba84-4176-9fa0-216e86ae113b-kube-api-access-z8xjd\") on node \"crc\" DevicePath \"\"" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.968148 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" event={"ID":"6dcace96-ba84-4176-9fa0-216e86ae113b","Type":"ContainerDied","Data":"c18debc623744ca4de66b52df7f5f89b8a2f8e4b9b4d2eec6afdd775a9b49583"} Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.968420 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c18debc623744ca4de66b52df7f5f89b8a2f8e4b9b4d2eec6afdd775a9b49583" Dec 03 18:07:42 crc kubenswrapper[4687]: I1203 18:07:42.968214 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.060626 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw"] Dec 03 18:07:43 crc kubenswrapper[4687]: E1203 18:07:43.061326 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerName="extract-content" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.061356 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerName="extract-content" Dec 03 18:07:43 crc kubenswrapper[4687]: E1203 18:07:43.061380 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerName="registry-server" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.061406 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerName="registry-server" Dec 03 18:07:43 crc kubenswrapper[4687]: E1203 18:07:43.061430 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dcace96-ba84-4176-9fa0-216e86ae113b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.061445 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dcace96-ba84-4176-9fa0-216e86ae113b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 18:07:43 crc kubenswrapper[4687]: E1203 18:07:43.061494 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerName="extract-utilities" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.061532 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerName="extract-utilities" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.061901 
4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="96acbf12-6ad5-4db1-bea4-5fcaeea2d2cb" containerName="registry-server" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.061948 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dcace96-ba84-4176-9fa0-216e86ae113b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.062955 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.064731 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.065663 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.065777 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.067065 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.073847 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw"] Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.180531 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-88lsw\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 
18:07:43.180791 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44kb\" (UniqueName: \"kubernetes.io/projected/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-kube-api-access-q44kb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-88lsw\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.181106 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-88lsw\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.283083 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-88lsw\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.283221 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q44kb\" (UniqueName: \"kubernetes.io/projected/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-kube-api-access-q44kb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-88lsw\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.283437 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-88lsw\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.288076 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-88lsw\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.295788 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-88lsw\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.300798 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q44kb\" (UniqueName: \"kubernetes.io/projected/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-kube-api-access-q44kb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-88lsw\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.381604 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.407906 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:07:43 crc kubenswrapper[4687]: E1203 18:07:43.408280 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.926637 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw"] Dec 03 18:07:43 crc kubenswrapper[4687]: I1203 18:07:43.985456 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" event={"ID":"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f","Type":"ContainerStarted","Data":"371b0f69b2d481a4e58de37f3fe875cb29bbe2cb0b55ea0e3c2348540dc0b4d8"} Dec 03 18:07:44 crc kubenswrapper[4687]: I1203 18:07:44.996305 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" event={"ID":"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f","Type":"ContainerStarted","Data":"9645cfdb9bb87b6799dab5feb0650a94bd9e637c12bda4dd7d8f568c515fc37d"} Dec 03 18:07:45 crc kubenswrapper[4687]: I1203 18:07:45.022349 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" podStartSLOduration=1.511159421 podStartE2EDuration="2.022329531s" podCreationTimestamp="2025-12-03 18:07:43 +0000 UTC" 
firstStartedPulling="2025-12-03 18:07:43.9312167 +0000 UTC m=+1696.821912133" lastFinishedPulling="2025-12-03 18:07:44.44238677 +0000 UTC m=+1697.333082243" observedRunningTime="2025-12-03 18:07:45.013864412 +0000 UTC m=+1697.904559855" watchObservedRunningTime="2025-12-03 18:07:45.022329531 +0000 UTC m=+1697.913024974" Dec 03 18:07:56 crc kubenswrapper[4687]: I1203 18:07:56.407436 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:07:56 crc kubenswrapper[4687]: E1203 18:07:56.408480 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:08:11 crc kubenswrapper[4687]: I1203 18:08:11.407859 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:08:11 crc kubenswrapper[4687]: E1203 18:08:11.408684 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:08:16 crc kubenswrapper[4687]: I1203 18:08:16.055602 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bb0e-account-create-update-qmlsr"] Dec 03 18:08:16 crc kubenswrapper[4687]: I1203 18:08:16.071441 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-db-create-nrxds"] Dec 03 18:08:16 crc kubenswrapper[4687]: I1203 18:08:16.083590 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nrxds"] Dec 03 18:08:16 crc kubenswrapper[4687]: I1203 18:08:16.096021 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bb0e-account-create-update-qmlsr"] Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.056068 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-cd6rr"] Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.073039 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3b20-account-create-update-qfsqf"] Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.080673 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-cd6rr"] Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.087930 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8805-account-create-update-5csps"] Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.095170 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3b20-account-create-update-qfsqf"] Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.102630 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8805-account-create-update-5csps"] Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.110035 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wtd6t"] Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.117721 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wtd6t"] Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.422038 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af7e623-619c-4f5a-87ca-264e29b9043c" path="/var/lib/kubelet/pods/2af7e623-619c-4f5a-87ca-264e29b9043c/volumes" Dec 03 18:08:17 crc 
kubenswrapper[4687]: I1203 18:08:17.423684 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355d39b5-af63-4905-909a-5ac6168fd205" path="/var/lib/kubelet/pods/355d39b5-af63-4905-909a-5ac6168fd205/volumes" Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.424422 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4349de4c-40de-4644-8653-bdd32bf7f9fa" path="/var/lib/kubelet/pods/4349de4c-40de-4644-8653-bdd32bf7f9fa/volumes" Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.425016 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75727b99-ff6a-438e-9cff-479f1b1331e3" path="/var/lib/kubelet/pods/75727b99-ff6a-438e-9cff-479f1b1331e3/volumes" Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.426315 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3a4272-20d4-4e23-99b3-5d8d3a729f16" path="/var/lib/kubelet/pods/cd3a4272-20d4-4e23-99b3-5d8d3a729f16/volumes" Dec 03 18:08:17 crc kubenswrapper[4687]: I1203 18:08:17.426894 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16caedb-6156-48c7-b43f-b012eb8f49ab" path="/var/lib/kubelet/pods/d16caedb-6156-48c7-b43f-b012eb8f49ab/volumes" Dec 03 18:08:24 crc kubenswrapper[4687]: I1203 18:08:24.408097 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:08:24 crc kubenswrapper[4687]: E1203 18:08:24.409115 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:08:36 crc kubenswrapper[4687]: I1203 18:08:36.408434 4687 scope.go:117] 
"RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:08:36 crc kubenswrapper[4687]: E1203 18:08:36.409217 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:08:39 crc kubenswrapper[4687]: I1203 18:08:39.113251 4687 scope.go:117] "RemoveContainer" containerID="993448c13f795ac7e1e3bf969a8213bafdaf8f829d44a8b9cf9738999fa20996" Dec 03 18:08:39 crc kubenswrapper[4687]: I1203 18:08:39.174319 4687 scope.go:117] "RemoveContainer" containerID="79f628c92d2f9b7861db3650f2be94626edb0b8a930f87b386e1048d7ec5c209" Dec 03 18:08:39 crc kubenswrapper[4687]: I1203 18:08:39.203765 4687 scope.go:117] "RemoveContainer" containerID="4ebec572bef66c102c90eed60bb105e37e4e9fd63725aee7d8b9a0bc3a94173f" Dec 03 18:08:39 crc kubenswrapper[4687]: I1203 18:08:39.243430 4687 scope.go:117] "RemoveContainer" containerID="37d302a38c36526d6c452de73aefbdad78287c6af21349121d021107fc3bc5a9" Dec 03 18:08:39 crc kubenswrapper[4687]: I1203 18:08:39.293730 4687 scope.go:117] "RemoveContainer" containerID="54dc7d0c794505c600e646df7438d2ae3659cca109a5bddcc51466e30b8f7ef3" Dec 03 18:08:39 crc kubenswrapper[4687]: I1203 18:08:39.351701 4687 scope.go:117] "RemoveContainer" containerID="9f0db8844203c010e2ab55d844400ae5d870ddd94854f3f0f5970da104c655ae" Dec 03 18:08:42 crc kubenswrapper[4687]: I1203 18:08:42.052011 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6d87h"] Dec 03 18:08:42 crc kubenswrapper[4687]: I1203 18:08:42.064423 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6d87h"] Dec 03 18:08:43 crc 
kubenswrapper[4687]: I1203 18:08:43.416184 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810e9e01-af1f-4a88-8858-76fc200db914" path="/var/lib/kubelet/pods/810e9e01-af1f-4a88-8858-76fc200db914/volumes" Dec 03 18:08:47 crc kubenswrapper[4687]: I1203 18:08:47.417729 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:08:47 crc kubenswrapper[4687]: E1203 18:08:47.418917 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.058668 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3681-account-create-update-8hzv5"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.070968 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-g957f"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.089655 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c684-account-create-update-2v9t5"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.097640 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-zsnhj"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.104966 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2fnzt"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.113460 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-g957f"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.120560 4687 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-db-create-zsnhj"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.127623 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3681-account-create-update-8hzv5"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.134351 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2fnzt"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.140962 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c684-account-create-update-2v9t5"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.148602 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-111f-account-create-update-mqbbx"] Dec 03 18:08:54 crc kubenswrapper[4687]: I1203 18:08:54.156215 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-111f-account-create-update-mqbbx"] Dec 03 18:08:55 crc kubenswrapper[4687]: I1203 18:08:55.432301 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b39c370-c8bc-4811-a7d3-75e3dd59450c" path="/var/lib/kubelet/pods/0b39c370-c8bc-4811-a7d3-75e3dd59450c/volumes" Dec 03 18:08:55 crc kubenswrapper[4687]: I1203 18:08:55.433106 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae1bbd2-1eaf-4869-b833-8ca42a487ba9" path="/var/lib/kubelet/pods/1ae1bbd2-1eaf-4869-b833-8ca42a487ba9/volumes" Dec 03 18:08:55 crc kubenswrapper[4687]: I1203 18:08:55.434164 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19b1f86-c351-48d8-b165-177ff9d25d76" path="/var/lib/kubelet/pods/b19b1f86-c351-48d8-b165-177ff9d25d76/volumes" Dec 03 18:08:55 crc kubenswrapper[4687]: I1203 18:08:55.434908 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3667af9-5425-4ca3-b700-48fdc547de52" path="/var/lib/kubelet/pods/d3667af9-5425-4ca3-b700-48fdc547de52/volumes" Dec 03 18:08:55 crc kubenswrapper[4687]: I1203 18:08:55.435891 4687 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d935fb22-7243-4c51-a92c-59e917358f4e" path="/var/lib/kubelet/pods/d935fb22-7243-4c51-a92c-59e917358f4e/volumes" Dec 03 18:08:55 crc kubenswrapper[4687]: I1203 18:08:55.436462 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9" path="/var/lib/kubelet/pods/fa9d192d-f4d8-4b1e-b32e-f4b9de7416e9/volumes" Dec 03 18:09:01 crc kubenswrapper[4687]: I1203 18:09:01.408301 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:09:01 crc kubenswrapper[4687]: E1203 18:09:01.408814 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:09:04 crc kubenswrapper[4687]: I1203 18:09:04.040414 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-fl77x"] Dec 03 18:09:04 crc kubenswrapper[4687]: I1203 18:09:04.056050 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-fl77x"] Dec 03 18:09:05 crc kubenswrapper[4687]: I1203 18:09:05.418370 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35a8832-c4a9-4d5a-8612-d870bcf6fa4c" path="/var/lib/kubelet/pods/d35a8832-c4a9-4d5a-8612-d870bcf6fa4c/volumes" Dec 03 18:09:12 crc kubenswrapper[4687]: I1203 18:09:12.408079 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:09:12 crc kubenswrapper[4687]: E1203 18:09:12.409412 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:09:23 crc kubenswrapper[4687]: I1203 18:09:23.408527 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:09:23 crc kubenswrapper[4687]: I1203 18:09:23.961775 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"d66cebd5b418cf6cdda6fc2f2a3e9bb11e29fa2a7592a975e56efd8c42700ccd"} Dec 03 18:09:30 crc kubenswrapper[4687]: I1203 18:09:30.040513 4687 generic.go:334] "Generic (PLEG): container finished" podID="283f8d5d-eee3-4591-b0d2-65c3cc8fa78f" containerID="9645cfdb9bb87b6799dab5feb0650a94bd9e637c12bda4dd7d8f568c515fc37d" exitCode=0 Dec 03 18:09:30 crc kubenswrapper[4687]: I1203 18:09:30.040672 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" event={"ID":"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f","Type":"ContainerDied","Data":"9645cfdb9bb87b6799dab5feb0650a94bd9e637c12bda4dd7d8f568c515fc37d"} Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.428926 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.476898 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-ssh-key\") pod \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.476985 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-inventory\") pod \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.477039 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q44kb\" (UniqueName: \"kubernetes.io/projected/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-kube-api-access-q44kb\") pod \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\" (UID: \"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f\") " Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.483549 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-kube-api-access-q44kb" (OuterVolumeSpecName: "kube-api-access-q44kb") pod "283f8d5d-eee3-4591-b0d2-65c3cc8fa78f" (UID: "283f8d5d-eee3-4591-b0d2-65c3cc8fa78f"). InnerVolumeSpecName "kube-api-access-q44kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.504737 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-inventory" (OuterVolumeSpecName: "inventory") pod "283f8d5d-eee3-4591-b0d2-65c3cc8fa78f" (UID: "283f8d5d-eee3-4591-b0d2-65c3cc8fa78f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.505206 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "283f8d5d-eee3-4591-b0d2-65c3cc8fa78f" (UID: "283f8d5d-eee3-4591-b0d2-65c3cc8fa78f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.579588 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.579629 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q44kb\" (UniqueName: \"kubernetes.io/projected/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-kube-api-access-q44kb\") on node \"crc\" DevicePath \"\"" Dec 03 18:09:31 crc kubenswrapper[4687]: I1203 18:09:31.579642 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/283f8d5d-eee3-4591-b0d2-65c3cc8fa78f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.065578 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" event={"ID":"283f8d5d-eee3-4591-b0d2-65c3cc8fa78f","Type":"ContainerDied","Data":"371b0f69b2d481a4e58de37f3fe875cb29bbe2cb0b55ea0e3c2348540dc0b4d8"} Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.065617 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="371b0f69b2d481a4e58de37f3fe875cb29bbe2cb0b55ea0e3c2348540dc0b4d8" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.065623 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-88lsw" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.147806 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn"] Dec 03 18:09:32 crc kubenswrapper[4687]: E1203 18:09:32.148240 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="283f8d5d-eee3-4591-b0d2-65c3cc8fa78f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.148442 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="283f8d5d-eee3-4591-b0d2-65c3cc8fa78f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.148679 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="283f8d5d-eee3-4591-b0d2-65c3cc8fa78f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.149497 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.151911 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.152079 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.152375 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.153439 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.159449 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn"] Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.202215 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.202416 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.202689 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppwpw\" (UniqueName: \"kubernetes.io/projected/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-kube-api-access-ppwpw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.305522 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppwpw\" (UniqueName: \"kubernetes.io/projected/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-kube-api-access-ppwpw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.308430 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.308571 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.312747 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.312790 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.323133 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppwpw\" (UniqueName: \"kubernetes.io/projected/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-kube-api-access-ppwpw\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.471695 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.995510 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn"] Dec 03 18:09:32 crc kubenswrapper[4687]: I1203 18:09:32.998780 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:09:33 crc kubenswrapper[4687]: I1203 18:09:33.075334 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" event={"ID":"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8","Type":"ContainerStarted","Data":"aac0effeb1fb9f7f3c70cdbe13b1647b19eaf4c197a41ad1777a885d71b9f28d"} Dec 03 18:09:34 crc kubenswrapper[4687]: I1203 18:09:34.084103 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" event={"ID":"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8","Type":"ContainerStarted","Data":"34d3c209b9ed47b62633beaaaecd16803a5ac75fee56b0ba0ea4b3eed349cf45"} Dec 03 18:09:34 crc kubenswrapper[4687]: I1203 18:09:34.104871 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" podStartSLOduration=1.710453191 podStartE2EDuration="2.104839079s" podCreationTimestamp="2025-12-03 18:09:32 +0000 UTC" firstStartedPulling="2025-12-03 18:09:32.998473745 +0000 UTC m=+1805.889169178" lastFinishedPulling="2025-12-03 18:09:33.392859633 +0000 UTC m=+1806.283555066" observedRunningTime="2025-12-03 18:09:34.101311704 +0000 UTC m=+1806.992007207" watchObservedRunningTime="2025-12-03 18:09:34.104839079 +0000 UTC m=+1806.995534542" Dec 03 18:09:35 crc kubenswrapper[4687]: I1203 18:09:35.055786 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hsnj2"] Dec 03 18:09:35 crc 
kubenswrapper[4687]: I1203 18:09:35.065068 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hsnj2"] Dec 03 18:09:35 crc kubenswrapper[4687]: I1203 18:09:35.419938 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80c9985b-e915-4819-8355-af9e8076f50a" path="/var/lib/kubelet/pods/80c9985b-e915-4819-8355-af9e8076f50a/volumes" Dec 03 18:09:39 crc kubenswrapper[4687]: I1203 18:09:39.481941 4687 scope.go:117] "RemoveContainer" containerID="106e9bb9cdb9821d16e06ec4496198c7af1cafae68082b7984488bd0dcfa0d9c" Dec 03 18:09:39 crc kubenswrapper[4687]: I1203 18:09:39.520385 4687 scope.go:117] "RemoveContainer" containerID="4a73d9726774ae32f7bfa173c0dd77c716548c81a9707a19dc5f4ba5b0ab16d4" Dec 03 18:09:39 crc kubenswrapper[4687]: I1203 18:09:39.553895 4687 scope.go:117] "RemoveContainer" containerID="a2e5afcb517975022f4345953179db1af79df4b766a88d8243433f9c08b555b0" Dec 03 18:09:39 crc kubenswrapper[4687]: I1203 18:09:39.608467 4687 scope.go:117] "RemoveContainer" containerID="9cbca4b6569ad41bd32ceecc7c6d2f12a0864cba2af434b62e8d42386e06c1c1" Dec 03 18:09:39 crc kubenswrapper[4687]: I1203 18:09:39.651933 4687 scope.go:117] "RemoveContainer" containerID="1695b7e85825ae74de33fb3ae91d389955636cb5993474e014bb39a4e26608ad" Dec 03 18:09:39 crc kubenswrapper[4687]: I1203 18:09:39.696661 4687 scope.go:117] "RemoveContainer" containerID="13106156a8a727d0e7eb3d857d8c0a5162f994b405c18ba1a79eb75addab621c" Dec 03 18:09:39 crc kubenswrapper[4687]: I1203 18:09:39.759455 4687 scope.go:117] "RemoveContainer" containerID="6c7e1a12bcc10974c23a95be6f4db6fb13ffb0c1b4ff38d9fc0fa8a06790252b" Dec 03 18:09:39 crc kubenswrapper[4687]: I1203 18:09:39.801585 4687 scope.go:117] "RemoveContainer" containerID="95032731d7b609c8404448f438c16eb16e2b95f9287b95df81b3891876a756c3" Dec 03 18:09:39 crc kubenswrapper[4687]: I1203 18:09:39.820158 4687 scope.go:117] "RemoveContainer" containerID="d32935b35304743f9d2b35aade9836515985de1aee239a4e4ee64a0268f30f41" 
Dec 03 18:09:46 crc kubenswrapper[4687]: I1203 18:09:46.028313 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-m67k4"] Dec 03 18:09:46 crc kubenswrapper[4687]: I1203 18:09:46.040059 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-m67k4"] Dec 03 18:09:47 crc kubenswrapper[4687]: I1203 18:09:47.427596 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a59cd24e-e105-48b8-a084-909b0dca97c0" path="/var/lib/kubelet/pods/a59cd24e-e105-48b8-a084-909b0dca97c0/volumes" Dec 03 18:09:50 crc kubenswrapper[4687]: I1203 18:09:50.030540 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-6tmxl"] Dec 03 18:09:50 crc kubenswrapper[4687]: I1203 18:09:50.041372 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-6tmxl"] Dec 03 18:09:50 crc kubenswrapper[4687]: I1203 18:09:50.048844 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2flgf"] Dec 03 18:09:50 crc kubenswrapper[4687]: I1203 18:09:50.062923 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2flgf"] Dec 03 18:09:51 crc kubenswrapper[4687]: I1203 18:09:51.422371 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23273387-49bc-4a7e-b07a-5695d947eda9" path="/var/lib/kubelet/pods/23273387-49bc-4a7e-b07a-5695d947eda9/volumes" Dec 03 18:09:51 crc kubenswrapper[4687]: I1203 18:09:51.423668 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34993b1-3135-46ef-9f85-9ab7525b1682" path="/var/lib/kubelet/pods/f34993b1-3135-46ef-9f85-9ab7525b1682/volumes" Dec 03 18:10:03 crc kubenswrapper[4687]: I1203 18:10:03.044749 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-schhv"] Dec 03 18:10:03 crc kubenswrapper[4687]: I1203 18:10:03.053439 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-sync-schhv"] Dec 03 18:10:03 crc kubenswrapper[4687]: I1203 18:10:03.419608 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67159b4a-2e66-424e-9e93-4863da0f5b56" path="/var/lib/kubelet/pods/67159b4a-2e66-424e-9e93-4863da0f5b56/volumes" Dec 03 18:10:40 crc kubenswrapper[4687]: I1203 18:10:40.001049 4687 scope.go:117] "RemoveContainer" containerID="c7ed67c3f470f06a2c5856e8532edb6094d3f342d491f66de49765a3ffba4968" Dec 03 18:10:40 crc kubenswrapper[4687]: I1203 18:10:40.051967 4687 scope.go:117] "RemoveContainer" containerID="3ac4ddadf524375d31ce336a55d2747fa18a9eddd54fe44fe7ba1304ed0a1919" Dec 03 18:10:40 crc kubenswrapper[4687]: I1203 18:10:40.094254 4687 scope.go:117] "RemoveContainer" containerID="0baf87078618cb287341d80a79f9ef3afd310e55c86d2b50d7d1c1e383aa87d6" Dec 03 18:10:40 crc kubenswrapper[4687]: I1203 18:10:40.161037 4687 scope.go:117] "RemoveContainer" containerID="bcdd0f4ca1412d82ddf81ab61553982e5cefa09d6e8092776bc27707f3de2715" Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.073834 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jtncf"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.088177 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0b6f-account-create-update-fk9cq"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.110300 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-51b4-account-create-update-8jd4x"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.122221 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2750-account-create-update-rhllc"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.133069 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7j48z"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.139718 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-db-create-jnf9v"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.149269 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2750-account-create-update-rhllc"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.156434 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0b6f-account-create-update-fk9cq"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.163696 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7j48z"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.173792 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-jnf9v"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.183428 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jtncf"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.217554 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-51b4-account-create-update-8jd4x"] Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.421710 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b03712-0693-4868-844b-2238f9703459" path="/var/lib/kubelet/pods/08b03712-0693-4868-844b-2238f9703459/volumes" Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.422678 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b" path="/var/lib/kubelet/pods/1fd2f8f7-c98d-48a4-96eb-e3df12a4da8b/volumes" Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.423489 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9190e920-62f7-4123-925b-f7d47371df49" path="/var/lib/kubelet/pods/9190e920-62f7-4123-925b-f7d47371df49/volumes" Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.424288 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa60c9ab-b67f-4480-8bf3-7027c68166c5" 
path="/var/lib/kubelet/pods/aa60c9ab-b67f-4480-8bf3-7027c68166c5/volumes" Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.425624 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6c641d-c691-45d3-8549-25373fef300c" path="/var/lib/kubelet/pods/dd6c641d-c691-45d3-8549-25373fef300c/volumes" Dec 03 18:10:41 crc kubenswrapper[4687]: I1203 18:10:41.426312 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b7a829-70bb-4e5d-9f72-3f2cf68563fb" path="/var/lib/kubelet/pods/e6b7a829-70bb-4e5d-9f72-3f2cf68563fb/volumes" Dec 03 18:10:49 crc kubenswrapper[4687]: I1203 18:10:49.815884 4687 generic.go:334] "Generic (PLEG): container finished" podID="bd21b7de-e79a-45b6-a3ea-9fb73f55fea8" containerID="34d3c209b9ed47b62633beaaaecd16803a5ac75fee56b0ba0ea4b3eed349cf45" exitCode=0 Dec 03 18:10:49 crc kubenswrapper[4687]: I1203 18:10:49.815977 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" event={"ID":"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8","Type":"ContainerDied","Data":"34d3c209b9ed47b62633beaaaecd16803a5ac75fee56b0ba0ea4b3eed349cf45"} Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.207390 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.282813 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-ssh-key\") pod \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.282925 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppwpw\" (UniqueName: \"kubernetes.io/projected/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-kube-api-access-ppwpw\") pod \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.283231 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-inventory\") pod \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\" (UID: \"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8\") " Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.288638 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-kube-api-access-ppwpw" (OuterVolumeSpecName: "kube-api-access-ppwpw") pod "bd21b7de-e79a-45b6-a3ea-9fb73f55fea8" (UID: "bd21b7de-e79a-45b6-a3ea-9fb73f55fea8"). InnerVolumeSpecName "kube-api-access-ppwpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.313417 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bd21b7de-e79a-45b6-a3ea-9fb73f55fea8" (UID: "bd21b7de-e79a-45b6-a3ea-9fb73f55fea8"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.322086 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-inventory" (OuterVolumeSpecName: "inventory") pod "bd21b7de-e79a-45b6-a3ea-9fb73f55fea8" (UID: "bd21b7de-e79a-45b6-a3ea-9fb73f55fea8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.386434 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.386474 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.386512 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppwpw\" (UniqueName: \"kubernetes.io/projected/bd21b7de-e79a-45b6-a3ea-9fb73f55fea8-kube-api-access-ppwpw\") on node \"crc\" DevicePath \"\"" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.837196 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" event={"ID":"bd21b7de-e79a-45b6-a3ea-9fb73f55fea8","Type":"ContainerDied","Data":"aac0effeb1fb9f7f3c70cdbe13b1647b19eaf4c197a41ad1777a885d71b9f28d"} Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.837237 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.837247 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac0effeb1fb9f7f3c70cdbe13b1647b19eaf4c197a41ad1777a885d71b9f28d" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.919521 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq"] Dec 03 18:10:51 crc kubenswrapper[4687]: E1203 18:10:51.919925 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd21b7de-e79a-45b6-a3ea-9fb73f55fea8" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.919944 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd21b7de-e79a-45b6-a3ea-9fb73f55fea8" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.920155 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd21b7de-e79a-45b6-a3ea-9fb73f55fea8" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.920859 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.924559 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.924760 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.924891 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.926031 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:10:51 crc kubenswrapper[4687]: I1203 18:10:51.945074 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq"] Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.000988 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmbdj\" (UniqueName: \"kubernetes.io/projected/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-kube-api-access-zmbdj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ztppq\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.001052 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ztppq\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 
18:10:52.001094 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ztppq\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.102640 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmbdj\" (UniqueName: \"kubernetes.io/projected/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-kube-api-access-zmbdj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ztppq\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.102991 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ztppq\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.103023 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ztppq\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.107146 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-ztppq\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.107386 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ztppq\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.135605 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmbdj\" (UniqueName: \"kubernetes.io/projected/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-kube-api-access-zmbdj\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ztppq\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.246791 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.765451 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq"] Dec 03 18:10:52 crc kubenswrapper[4687]: I1203 18:10:52.849223 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" event={"ID":"6c19f653-0ec6-4a75-a396-dacbe41c2c2e","Type":"ContainerStarted","Data":"d286f5200c46812bef0223034e1a8f8aa696bdc63efc88ee8bac743e2da0e4c8"} Dec 03 18:10:53 crc kubenswrapper[4687]: I1203 18:10:53.863488 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" event={"ID":"6c19f653-0ec6-4a75-a396-dacbe41c2c2e","Type":"ContainerStarted","Data":"9db838244b70a019b63ee1f645a36ad515129999635bc92a1b01c7d2d023a115"} Dec 03 18:10:53 crc kubenswrapper[4687]: I1203 18:10:53.887270 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" podStartSLOduration=2.20166159 podStartE2EDuration="2.887249558s" podCreationTimestamp="2025-12-03 18:10:51 +0000 UTC" firstStartedPulling="2025-12-03 18:10:52.773185388 +0000 UTC m=+1885.663880821" lastFinishedPulling="2025-12-03 18:10:53.458773346 +0000 UTC m=+1886.349468789" observedRunningTime="2025-12-03 18:10:53.884068373 +0000 UTC m=+1886.774763806" watchObservedRunningTime="2025-12-03 18:10:53.887249558 +0000 UTC m=+1886.777944991" Dec 03 18:10:58 crc kubenswrapper[4687]: I1203 18:10:58.926036 4687 generic.go:334] "Generic (PLEG): container finished" podID="6c19f653-0ec6-4a75-a396-dacbe41c2c2e" containerID="9db838244b70a019b63ee1f645a36ad515129999635bc92a1b01c7d2d023a115" exitCode=0 Dec 03 18:10:58 crc kubenswrapper[4687]: I1203 18:10:58.926146 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" event={"ID":"6c19f653-0ec6-4a75-a396-dacbe41c2c2e","Type":"ContainerDied","Data":"9db838244b70a019b63ee1f645a36ad515129999635bc92a1b01c7d2d023a115"} Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.380001 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.465153 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-inventory\") pod \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.465324 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmbdj\" (UniqueName: \"kubernetes.io/projected/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-kube-api-access-zmbdj\") pod \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.465414 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-ssh-key\") pod \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\" (UID: \"6c19f653-0ec6-4a75-a396-dacbe41c2c2e\") " Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.472226 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-kube-api-access-zmbdj" (OuterVolumeSpecName: "kube-api-access-zmbdj") pod "6c19f653-0ec6-4a75-a396-dacbe41c2c2e" (UID: "6c19f653-0ec6-4a75-a396-dacbe41c2c2e"). InnerVolumeSpecName "kube-api-access-zmbdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.498971 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6c19f653-0ec6-4a75-a396-dacbe41c2c2e" (UID: "6c19f653-0ec6-4a75-a396-dacbe41c2c2e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.501116 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-inventory" (OuterVolumeSpecName: "inventory") pod "6c19f653-0ec6-4a75-a396-dacbe41c2c2e" (UID: "6c19f653-0ec6-4a75-a396-dacbe41c2c2e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.568535 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.568578 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.568591 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmbdj\" (UniqueName: \"kubernetes.io/projected/6c19f653-0ec6-4a75-a396-dacbe41c2c2e-kube-api-access-zmbdj\") on node \"crc\" DevicePath \"\"" Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.941565 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" 
event={"ID":"6c19f653-0ec6-4a75-a396-dacbe41c2c2e","Type":"ContainerDied","Data":"d286f5200c46812bef0223034e1a8f8aa696bdc63efc88ee8bac743e2da0e4c8"} Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.941845 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d286f5200c46812bef0223034e1a8f8aa696bdc63efc88ee8bac743e2da0e4c8" Dec 03 18:11:00 crc kubenswrapper[4687]: I1203 18:11:00.941596 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ztppq" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.014863 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx"] Dec 03 18:11:01 crc kubenswrapper[4687]: E1203 18:11:01.015348 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c19f653-0ec6-4a75-a396-dacbe41c2c2e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.015375 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c19f653-0ec6-4a75-a396-dacbe41c2c2e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.015690 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c19f653-0ec6-4a75-a396-dacbe41c2c2e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.016485 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.019836 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.020152 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.020156 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.020384 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.024354 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx"] Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.178412 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89dpx\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.178768 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89dpx\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.178928 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvsdc\" (UniqueName: \"kubernetes.io/projected/a9c34c4b-6990-485c-91b7-c07c7191c398-kube-api-access-pvsdc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89dpx\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.280273 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89dpx\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.280353 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89dpx\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.280420 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvsdc\" (UniqueName: \"kubernetes.io/projected/a9c34c4b-6990-485c-91b7-c07c7191c398-kube-api-access-pvsdc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89dpx\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.285834 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89dpx\" (UID: 
\"a9c34c4b-6990-485c-91b7-c07c7191c398\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.286008 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89dpx\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.296692 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvsdc\" (UniqueName: \"kubernetes.io/projected/a9c34c4b-6990-485c-91b7-c07c7191c398-kube-api-access-pvsdc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-89dpx\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.336735 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.859962 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx"] Dec 03 18:11:01 crc kubenswrapper[4687]: W1203 18:11:01.865368 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9c34c4b_6990_485c_91b7_c07c7191c398.slice/crio-b2ccbb91523062933cbfcc7180279287e53f6db3f61b23bfe07e84c59bf522d7 WatchSource:0}: Error finding container b2ccbb91523062933cbfcc7180279287e53f6db3f61b23bfe07e84c59bf522d7: Status 404 returned error can't find the container with id b2ccbb91523062933cbfcc7180279287e53f6db3f61b23bfe07e84c59bf522d7 Dec 03 18:11:01 crc kubenswrapper[4687]: I1203 18:11:01.950704 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" event={"ID":"a9c34c4b-6990-485c-91b7-c07c7191c398","Type":"ContainerStarted","Data":"b2ccbb91523062933cbfcc7180279287e53f6db3f61b23bfe07e84c59bf522d7"} Dec 03 18:11:02 crc kubenswrapper[4687]: I1203 18:11:02.959910 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" event={"ID":"a9c34c4b-6990-485c-91b7-c07c7191c398","Type":"ContainerStarted","Data":"1e53ff165ddd9c42fddb1abb312423c2f18567dd02ed085d062c9e467df332a6"} Dec 03 18:11:02 crc kubenswrapper[4687]: I1203 18:11:02.979803 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" podStartSLOduration=2.531750233 podStartE2EDuration="2.979782612s" podCreationTimestamp="2025-12-03 18:11:00 +0000 UTC" firstStartedPulling="2025-12-03 18:11:01.868688672 +0000 UTC m=+1894.759384145" lastFinishedPulling="2025-12-03 18:11:02.316721091 +0000 UTC m=+1895.207416524" 
observedRunningTime="2025-12-03 18:11:02.974037897 +0000 UTC m=+1895.864733340" watchObservedRunningTime="2025-12-03 18:11:02.979782612 +0000 UTC m=+1895.870478045" Dec 03 18:11:10 crc kubenswrapper[4687]: I1203 18:11:10.046016 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4npbh"] Dec 03 18:11:10 crc kubenswrapper[4687]: I1203 18:11:10.054143 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4npbh"] Dec 03 18:11:11 crc kubenswrapper[4687]: I1203 18:11:11.422636 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e" path="/var/lib/kubelet/pods/e8175e5f-f7ce-4c39-98ff-d7b2ff9a0c9e/volumes" Dec 03 18:11:33 crc kubenswrapper[4687]: I1203 18:11:33.209914 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bttct"] Dec 03 18:11:33 crc kubenswrapper[4687]: I1203 18:11:33.218162 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rz5q5"] Dec 03 18:11:33 crc kubenswrapper[4687]: I1203 18:11:33.228175 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bttct"] Dec 03 18:11:33 crc kubenswrapper[4687]: I1203 18:11:33.236238 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rz5q5"] Dec 03 18:11:33 crc kubenswrapper[4687]: I1203 18:11:33.437424 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512fe776-5298-42ac-b760-682e3b0d99e5" path="/var/lib/kubelet/pods/512fe776-5298-42ac-b760-682e3b0d99e5/volumes" Dec 03 18:11:33 crc kubenswrapper[4687]: I1203 18:11:33.438992 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="920884a6-a7b0-49c6-abe7-2b9a9f8b9835" path="/var/lib/kubelet/pods/920884a6-a7b0-49c6-abe7-2b9a9f8b9835/volumes" Dec 03 18:11:40 crc kubenswrapper[4687]: I1203 18:11:40.286563 4687 
scope.go:117] "RemoveContainer" containerID="16cc30c979a0d4dbf89d5ca9aa632da49f6c7654030b69e847d08378a9b20aa9" Dec 03 18:11:40 crc kubenswrapper[4687]: I1203 18:11:40.308250 4687 scope.go:117] "RemoveContainer" containerID="2fa61684b1b57e8e27b69aaa3cc1f4fc3dd878c8650cbd7c2e098398551bbacf" Dec 03 18:11:40 crc kubenswrapper[4687]: I1203 18:11:40.366085 4687 scope.go:117] "RemoveContainer" containerID="598a3a2004c1cd6f0544d541eb18085eab1d8f72f01283a806f2b0ae29d26b6a" Dec 03 18:11:40 crc kubenswrapper[4687]: I1203 18:11:40.411844 4687 scope.go:117] "RemoveContainer" containerID="e96d3423b0c75c64adf41de778f58b1a0d4db53f25c2268ae661428cab33ecec" Dec 03 18:11:40 crc kubenswrapper[4687]: I1203 18:11:40.470202 4687 scope.go:117] "RemoveContainer" containerID="1b6988722fd79c441f7cdfbf33e17fcfe74a4f5f1351124f554fd4be6476708b" Dec 03 18:11:40 crc kubenswrapper[4687]: I1203 18:11:40.526445 4687 scope.go:117] "RemoveContainer" containerID="26c75ed11cfbece24255263cff21a228a477e7b19947f19534c75b814e778778" Dec 03 18:11:40 crc kubenswrapper[4687]: I1203 18:11:40.588804 4687 scope.go:117] "RemoveContainer" containerID="4a98f67f8ed74b45e48dbb6a8dcd4d35c322e73bb89821943833a348f1515dac" Dec 03 18:11:40 crc kubenswrapper[4687]: I1203 18:11:40.614403 4687 scope.go:117] "RemoveContainer" containerID="3d914089314d14fa717b485a4a023ea1bc178b893de4981c8b987644fd091245" Dec 03 18:11:40 crc kubenswrapper[4687]: I1203 18:11:40.656298 4687 scope.go:117] "RemoveContainer" containerID="836b2b57a9c16d200864574b58edd4b9eb0e851e37ffad6192b2db07fd64c4de" Dec 03 18:11:43 crc kubenswrapper[4687]: I1203 18:11:43.319381 4687 generic.go:334] "Generic (PLEG): container finished" podID="a9c34c4b-6990-485c-91b7-c07c7191c398" containerID="1e53ff165ddd9c42fddb1abb312423c2f18567dd02ed085d062c9e467df332a6" exitCode=0 Dec 03 18:11:43 crc kubenswrapper[4687]: I1203 18:11:43.319472 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" 
event={"ID":"a9c34c4b-6990-485c-91b7-c07c7191c398","Type":"ContainerDied","Data":"1e53ff165ddd9c42fddb1abb312423c2f18567dd02ed085d062c9e467df332a6"} Dec 03 18:11:44 crc kubenswrapper[4687]: I1203 18:11:44.111697 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:11:44 crc kubenswrapper[4687]: I1203 18:11:44.112298 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:11:44 crc kubenswrapper[4687]: I1203 18:11:44.835529 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:44 crc kubenswrapper[4687]: I1203 18:11:44.939027 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-ssh-key\") pod \"a9c34c4b-6990-485c-91b7-c07c7191c398\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " Dec 03 18:11:44 crc kubenswrapper[4687]: I1203 18:11:44.939310 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvsdc\" (UniqueName: \"kubernetes.io/projected/a9c34c4b-6990-485c-91b7-c07c7191c398-kube-api-access-pvsdc\") pod \"a9c34c4b-6990-485c-91b7-c07c7191c398\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " Dec 03 18:11:44 crc kubenswrapper[4687]: I1203 18:11:44.939381 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-inventory\") pod \"a9c34c4b-6990-485c-91b7-c07c7191c398\" (UID: \"a9c34c4b-6990-485c-91b7-c07c7191c398\") " Dec 03 18:11:44 crc kubenswrapper[4687]: I1203 18:11:44.959442 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c34c4b-6990-485c-91b7-c07c7191c398-kube-api-access-pvsdc" (OuterVolumeSpecName: "kube-api-access-pvsdc") pod "a9c34c4b-6990-485c-91b7-c07c7191c398" (UID: "a9c34c4b-6990-485c-91b7-c07c7191c398"). InnerVolumeSpecName "kube-api-access-pvsdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:11:44 crc kubenswrapper[4687]: I1203 18:11:44.972814 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-inventory" (OuterVolumeSpecName: "inventory") pod "a9c34c4b-6990-485c-91b7-c07c7191c398" (UID: "a9c34c4b-6990-485c-91b7-c07c7191c398"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:11:44 crc kubenswrapper[4687]: I1203 18:11:44.976637 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a9c34c4b-6990-485c-91b7-c07c7191c398" (UID: "a9c34c4b-6990-485c-91b7-c07c7191c398"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.042006 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvsdc\" (UniqueName: \"kubernetes.io/projected/a9c34c4b-6990-485c-91b7-c07c7191c398-kube-api-access-pvsdc\") on node \"crc\" DevicePath \"\"" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.042043 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.042054 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c34c4b-6990-485c-91b7-c07c7191c398-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.344633 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" event={"ID":"a9c34c4b-6990-485c-91b7-c07c7191c398","Type":"ContainerDied","Data":"b2ccbb91523062933cbfcc7180279287e53f6db3f61b23bfe07e84c59bf522d7"} Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.344675 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2ccbb91523062933cbfcc7180279287e53f6db3f61b23bfe07e84c59bf522d7" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.344691 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-89dpx" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.443308 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk"] Dec 03 18:11:45 crc kubenswrapper[4687]: E1203 18:11:45.443837 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c34c4b-6990-485c-91b7-c07c7191c398" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.443862 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c34c4b-6990-485c-91b7-c07c7191c398" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.444135 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c34c4b-6990-485c-91b7-c07c7191c398" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.444955 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.447954 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.448212 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.448056 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.448430 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.470157 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk"] Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.550172 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8fj9\" (UniqueName: \"kubernetes.io/projected/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-kube-api-access-w8fj9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.550281 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.550304 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.652373 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8fj9\" (UniqueName: \"kubernetes.io/projected/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-kube-api-access-w8fj9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.652477 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.652501 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.656341 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk\" (UID: 
\"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.658603 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.671925 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8fj9\" (UniqueName: \"kubernetes.io/projected/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-kube-api-access-w8fj9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:45 crc kubenswrapper[4687]: I1203 18:11:45.761846 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:11:46 crc kubenswrapper[4687]: I1203 18:11:46.330492 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk"] Dec 03 18:11:46 crc kubenswrapper[4687]: W1203 18:11:46.348829 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd79dbe03_ec71_4fc7_8237_b3094ecb81ca.slice/crio-27400275287d9549b323534f9e1f380cce31e09d93e19cd23031eec2cc0eba56 WatchSource:0}: Error finding container 27400275287d9549b323534f9e1f380cce31e09d93e19cd23031eec2cc0eba56: Status 404 returned error can't find the container with id 27400275287d9549b323534f9e1f380cce31e09d93e19cd23031eec2cc0eba56 Dec 03 18:11:47 crc kubenswrapper[4687]: I1203 18:11:47.370764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" event={"ID":"d79dbe03-ec71-4fc7-8237-b3094ecb81ca","Type":"ContainerStarted","Data":"e7c3f9ed1d93073026be340da77401ff6e7b56b6a738aba6776aa510de5b8c67"} Dec 03 18:11:47 crc kubenswrapper[4687]: I1203 18:11:47.371270 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" event={"ID":"d79dbe03-ec71-4fc7-8237-b3094ecb81ca","Type":"ContainerStarted","Data":"27400275287d9549b323534f9e1f380cce31e09d93e19cd23031eec2cc0eba56"} Dec 03 18:11:47 crc kubenswrapper[4687]: I1203 18:11:47.402251 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" podStartSLOduration=1.994685824 podStartE2EDuration="2.402213861s" podCreationTimestamp="2025-12-03 18:11:45 +0000 UTC" firstStartedPulling="2025-12-03 18:11:46.350879182 +0000 UTC m=+1939.241574605" lastFinishedPulling="2025-12-03 18:11:46.758407199 +0000 UTC m=+1939.649102642" 
observedRunningTime="2025-12-03 18:11:47.394686109 +0000 UTC m=+1940.285381542" watchObservedRunningTime="2025-12-03 18:11:47.402213861 +0000 UTC m=+1940.292909304" Dec 03 18:12:14 crc kubenswrapper[4687]: I1203 18:12:14.111979 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:12:14 crc kubenswrapper[4687]: I1203 18:12:14.113181 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:12:17 crc kubenswrapper[4687]: I1203 18:12:17.040226 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-z97pq"] Dec 03 18:12:17 crc kubenswrapper[4687]: I1203 18:12:17.047886 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-z97pq"] Dec 03 18:12:17 crc kubenswrapper[4687]: I1203 18:12:17.419570 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2671f702-a121-43e8-be5d-b77f30d600b4" path="/var/lib/kubelet/pods/2671f702-a121-43e8-be5d-b77f30d600b4/volumes" Dec 03 18:12:40 crc kubenswrapper[4687]: I1203 18:12:40.854094 4687 scope.go:117] "RemoveContainer" containerID="6f8e3ab246b109f9380e049915ef90065d9264c2059d1e2f8c361ad7cd97211a" Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.111393 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.111914 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.111967 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.112753 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d66cebd5b418cf6cdda6fc2f2a3e9bb11e29fa2a7592a975e56efd8c42700ccd"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.112845 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://d66cebd5b418cf6cdda6fc2f2a3e9bb11e29fa2a7592a975e56efd8c42700ccd" gracePeriod=600 Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.944441 4687 generic.go:334] "Generic (PLEG): container finished" podID="d79dbe03-ec71-4fc7-8237-b3094ecb81ca" containerID="e7c3f9ed1d93073026be340da77401ff6e7b56b6a738aba6776aa510de5b8c67" exitCode=0 Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.944663 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" 
event={"ID":"d79dbe03-ec71-4fc7-8237-b3094ecb81ca","Type":"ContainerDied","Data":"e7c3f9ed1d93073026be340da77401ff6e7b56b6a738aba6776aa510de5b8c67"} Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.949791 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="d66cebd5b418cf6cdda6fc2f2a3e9bb11e29fa2a7592a975e56efd8c42700ccd" exitCode=0 Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.949833 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"d66cebd5b418cf6cdda6fc2f2a3e9bb11e29fa2a7592a975e56efd8c42700ccd"} Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.949858 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560"} Dec 03 18:12:44 crc kubenswrapper[4687]: I1203 18:12:44.949874 4687 scope.go:117] "RemoveContainer" containerID="ca9e224504b4bf4e666c982cfa6a84fcdaef3cf16bf88b4b1ad7ac475c973ee9" Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.502604 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.663465 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8fj9\" (UniqueName: \"kubernetes.io/projected/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-kube-api-access-w8fj9\") pod \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.663564 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-inventory\") pod \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.663783 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-ssh-key\") pod \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\" (UID: \"d79dbe03-ec71-4fc7-8237-b3094ecb81ca\") " Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.670419 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-kube-api-access-w8fj9" (OuterVolumeSpecName: "kube-api-access-w8fj9") pod "d79dbe03-ec71-4fc7-8237-b3094ecb81ca" (UID: "d79dbe03-ec71-4fc7-8237-b3094ecb81ca"). InnerVolumeSpecName "kube-api-access-w8fj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.691229 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d79dbe03-ec71-4fc7-8237-b3094ecb81ca" (UID: "d79dbe03-ec71-4fc7-8237-b3094ecb81ca"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.699329 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-inventory" (OuterVolumeSpecName: "inventory") pod "d79dbe03-ec71-4fc7-8237-b3094ecb81ca" (UID: "d79dbe03-ec71-4fc7-8237-b3094ecb81ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.766503 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8fj9\" (UniqueName: \"kubernetes.io/projected/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-kube-api-access-w8fj9\") on node \"crc\" DevicePath \"\"" Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.766571 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:12:46 crc kubenswrapper[4687]: I1203 18:12:46.766583 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d79dbe03-ec71-4fc7-8237-b3094ecb81ca-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.002257 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" event={"ID":"d79dbe03-ec71-4fc7-8237-b3094ecb81ca","Type":"ContainerDied","Data":"27400275287d9549b323534f9e1f380cce31e09d93e19cd23031eec2cc0eba56"} Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.002298 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27400275287d9549b323534f9e1f380cce31e09d93e19cd23031eec2cc0eba56" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.002306 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.072620 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ssfcd"] Dec 03 18:12:47 crc kubenswrapper[4687]: E1203 18:12:47.073060 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d79dbe03-ec71-4fc7-8237-b3094ecb81ca" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.073078 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d79dbe03-ec71-4fc7-8237-b3094ecb81ca" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.073300 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d79dbe03-ec71-4fc7-8237-b3094ecb81ca" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.073926 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.076333 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.076436 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.077117 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.077683 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.085264 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ssfcd"] Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.173540 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ssfcd\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.173805 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ssfcd\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.174193 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9bfsb\" (UniqueName: \"kubernetes.io/projected/cc101fd4-addb-4d63-b123-d0c54197956c-kube-api-access-9bfsb\") pod \"ssh-known-hosts-edpm-deployment-ssfcd\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.277027 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bfsb\" (UniqueName: \"kubernetes.io/projected/cc101fd4-addb-4d63-b123-d0c54197956c-kube-api-access-9bfsb\") pod \"ssh-known-hosts-edpm-deployment-ssfcd\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.277214 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ssfcd\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.277303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ssfcd\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.281831 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ssfcd\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.282289 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ssfcd\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.302067 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bfsb\" (UniqueName: \"kubernetes.io/projected/cc101fd4-addb-4d63-b123-d0c54197956c-kube-api-access-9bfsb\") pod \"ssh-known-hosts-edpm-deployment-ssfcd\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.419628 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:47 crc kubenswrapper[4687]: I1203 18:12:47.935702 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ssfcd"] Dec 03 18:12:48 crc kubenswrapper[4687]: I1203 18:12:48.010454 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" event={"ID":"cc101fd4-addb-4d63-b123-d0c54197956c","Type":"ContainerStarted","Data":"aca9416b4a5dc5732f29b1a34d42d4d388e0d21feff3b4e55e631559131440d1"} Dec 03 18:12:49 crc kubenswrapper[4687]: I1203 18:12:49.020554 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" event={"ID":"cc101fd4-addb-4d63-b123-d0c54197956c","Type":"ContainerStarted","Data":"2f5178ee6e9cb1d51f3d409f1d9096b671a1c2434a37c1ebda0c2b6f66dc6197"} Dec 03 18:12:49 crc kubenswrapper[4687]: I1203 18:12:49.037683 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" 
podStartSLOduration=1.568975629 podStartE2EDuration="2.037661874s" podCreationTimestamp="2025-12-03 18:12:47 +0000 UTC" firstStartedPulling="2025-12-03 18:12:47.938951298 +0000 UTC m=+2000.829646731" lastFinishedPulling="2025-12-03 18:12:48.407637543 +0000 UTC m=+2001.298332976" observedRunningTime="2025-12-03 18:12:49.033373169 +0000 UTC m=+2001.924068622" watchObservedRunningTime="2025-12-03 18:12:49.037661874 +0000 UTC m=+2001.928357317" Dec 03 18:12:56 crc kubenswrapper[4687]: I1203 18:12:56.090352 4687 generic.go:334] "Generic (PLEG): container finished" podID="cc101fd4-addb-4d63-b123-d0c54197956c" containerID="2f5178ee6e9cb1d51f3d409f1d9096b671a1c2434a37c1ebda0c2b6f66dc6197" exitCode=0 Dec 03 18:12:56 crc kubenswrapper[4687]: I1203 18:12:56.090930 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" event={"ID":"cc101fd4-addb-4d63-b123-d0c54197956c","Type":"ContainerDied","Data":"2f5178ee6e9cb1d51f3d409f1d9096b671a1c2434a37c1ebda0c2b6f66dc6197"} Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.596083 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.701714 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bfsb\" (UniqueName: \"kubernetes.io/projected/cc101fd4-addb-4d63-b123-d0c54197956c-kube-api-access-9bfsb\") pod \"cc101fd4-addb-4d63-b123-d0c54197956c\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.701854 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-inventory-0\") pod \"cc101fd4-addb-4d63-b123-d0c54197956c\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.702084 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-ssh-key-openstack-edpm-ipam\") pod \"cc101fd4-addb-4d63-b123-d0c54197956c\" (UID: \"cc101fd4-addb-4d63-b123-d0c54197956c\") " Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.715071 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc101fd4-addb-4d63-b123-d0c54197956c-kube-api-access-9bfsb" (OuterVolumeSpecName: "kube-api-access-9bfsb") pod "cc101fd4-addb-4d63-b123-d0c54197956c" (UID: "cc101fd4-addb-4d63-b123-d0c54197956c"). InnerVolumeSpecName "kube-api-access-9bfsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.733558 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cc101fd4-addb-4d63-b123-d0c54197956c" (UID: "cc101fd4-addb-4d63-b123-d0c54197956c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.734059 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "cc101fd4-addb-4d63-b123-d0c54197956c" (UID: "cc101fd4-addb-4d63-b123-d0c54197956c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.805846 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.805905 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bfsb\" (UniqueName: \"kubernetes.io/projected/cc101fd4-addb-4d63-b123-d0c54197956c-kube-api-access-9bfsb\") on node \"crc\" DevicePath \"\"" Dec 03 18:12:57 crc kubenswrapper[4687]: I1203 18:12:57.805925 4687 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/cc101fd4-addb-4d63-b123-d0c54197956c-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.111306 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" 
event={"ID":"cc101fd4-addb-4d63-b123-d0c54197956c","Type":"ContainerDied","Data":"aca9416b4a5dc5732f29b1a34d42d4d388e0d21feff3b4e55e631559131440d1"} Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.111349 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aca9416b4a5dc5732f29b1a34d42d4d388e0d21feff3b4e55e631559131440d1" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.111352 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ssfcd" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.181324 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj"] Dec 03 18:12:58 crc kubenswrapper[4687]: E1203 18:12:58.181766 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc101fd4-addb-4d63-b123-d0c54197956c" containerName="ssh-known-hosts-edpm-deployment" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.181789 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc101fd4-addb-4d63-b123-d0c54197956c" containerName="ssh-known-hosts-edpm-deployment" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.182031 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc101fd4-addb-4d63-b123-d0c54197956c" containerName="ssh-known-hosts-edpm-deployment" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.182777 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.189666 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.191483 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.192351 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.192637 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.205896 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj"] Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.315108 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kcjpj\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.315181 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kcjpj\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.315223 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdw2j\" (UniqueName: \"kubernetes.io/projected/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-kube-api-access-bdw2j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kcjpj\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.416504 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kcjpj\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.416555 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kcjpj\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.416594 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdw2j\" (UniqueName: \"kubernetes.io/projected/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-kube-api-access-bdw2j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kcjpj\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.420578 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kcjpj\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.420581 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kcjpj\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.436034 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdw2j\" (UniqueName: \"kubernetes.io/projected/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-kube-api-access-bdw2j\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kcjpj\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.500947 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:12:58 crc kubenswrapper[4687]: I1203 18:12:58.988182 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj"] Dec 03 18:12:58 crc kubenswrapper[4687]: W1203 18:12:58.994098 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba0bb298_d1f4_478c_a663_9a8e20bfdcfd.slice/crio-5e6e59dc1e5607ff2b33084c481740816be697d19cc20395181595c281622142 WatchSource:0}: Error finding container 5e6e59dc1e5607ff2b33084c481740816be697d19cc20395181595c281622142: Status 404 returned error can't find the container with id 5e6e59dc1e5607ff2b33084c481740816be697d19cc20395181595c281622142 Dec 03 18:12:59 crc kubenswrapper[4687]: I1203 18:12:59.120668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" event={"ID":"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd","Type":"ContainerStarted","Data":"5e6e59dc1e5607ff2b33084c481740816be697d19cc20395181595c281622142"} Dec 03 18:13:00 crc kubenswrapper[4687]: I1203 18:13:00.130177 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" event={"ID":"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd","Type":"ContainerStarted","Data":"0c9285724d5ad1b59ab1fb226f0187d34974710a4ca806e93a24c7b0373a4bbb"} Dec 03 18:13:00 crc kubenswrapper[4687]: I1203 18:13:00.147669 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" podStartSLOduration=1.638669937 podStartE2EDuration="2.147645695s" podCreationTimestamp="2025-12-03 18:12:58 +0000 UTC" firstStartedPulling="2025-12-03 18:12:58.995845897 +0000 UTC m=+2011.886541330" lastFinishedPulling="2025-12-03 18:12:59.504821655 +0000 UTC m=+2012.395517088" observedRunningTime="2025-12-03 
18:13:00.143948685 +0000 UTC m=+2013.034644158" watchObservedRunningTime="2025-12-03 18:13:00.147645695 +0000 UTC m=+2013.038341128" Dec 03 18:13:09 crc kubenswrapper[4687]: I1203 18:13:09.205241 4687 generic.go:334] "Generic (PLEG): container finished" podID="ba0bb298-d1f4-478c-a663-9a8e20bfdcfd" containerID="0c9285724d5ad1b59ab1fb226f0187d34974710a4ca806e93a24c7b0373a4bbb" exitCode=0 Dec 03 18:13:09 crc kubenswrapper[4687]: I1203 18:13:09.205481 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" event={"ID":"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd","Type":"ContainerDied","Data":"0c9285724d5ad1b59ab1fb226f0187d34974710a4ca806e93a24c7b0373a4bbb"} Dec 03 18:13:10 crc kubenswrapper[4687]: I1203 18:13:10.675412 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:13:10 crc kubenswrapper[4687]: I1203 18:13:10.857597 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-inventory\") pod \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " Dec 03 18:13:10 crc kubenswrapper[4687]: I1203 18:13:10.858052 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdw2j\" (UniqueName: \"kubernetes.io/projected/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-kube-api-access-bdw2j\") pod \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " Dec 03 18:13:10 crc kubenswrapper[4687]: I1203 18:13:10.858211 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-ssh-key\") pod \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\" (UID: \"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd\") " Dec 03 18:13:10 crc 
kubenswrapper[4687]: I1203 18:13:10.863149 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-kube-api-access-bdw2j" (OuterVolumeSpecName: "kube-api-access-bdw2j") pod "ba0bb298-d1f4-478c-a663-9a8e20bfdcfd" (UID: "ba0bb298-d1f4-478c-a663-9a8e20bfdcfd"). InnerVolumeSpecName "kube-api-access-bdw2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:13:10 crc kubenswrapper[4687]: I1203 18:13:10.885895 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-inventory" (OuterVolumeSpecName: "inventory") pod "ba0bb298-d1f4-478c-a663-9a8e20bfdcfd" (UID: "ba0bb298-d1f4-478c-a663-9a8e20bfdcfd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:13:10 crc kubenswrapper[4687]: I1203 18:13:10.887834 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ba0bb298-d1f4-478c-a663-9a8e20bfdcfd" (UID: "ba0bb298-d1f4-478c-a663-9a8e20bfdcfd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:13:10 crc kubenswrapper[4687]: I1203 18:13:10.960903 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:13:10 crc kubenswrapper[4687]: I1203 18:13:10.960944 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdw2j\" (UniqueName: \"kubernetes.io/projected/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-kube-api-access-bdw2j\") on node \"crc\" DevicePath \"\"" Dec 03 18:13:10 crc kubenswrapper[4687]: I1203 18:13:10.960961 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ba0bb298-d1f4-478c-a663-9a8e20bfdcfd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.228288 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" event={"ID":"ba0bb298-d1f4-478c-a663-9a8e20bfdcfd","Type":"ContainerDied","Data":"5e6e59dc1e5607ff2b33084c481740816be697d19cc20395181595c281622142"} Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.228333 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6e59dc1e5607ff2b33084c481740816be697d19cc20395181595c281622142" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.228391 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kcjpj" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.318339 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h"] Dec 03 18:13:11 crc kubenswrapper[4687]: E1203 18:13:11.318807 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba0bb298-d1f4-478c-a663-9a8e20bfdcfd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.318828 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba0bb298-d1f4-478c-a663-9a8e20bfdcfd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.319026 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba0bb298-d1f4-478c-a663-9a8e20bfdcfd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.319818 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.322076 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.322391 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.323024 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.327940 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h"] Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.333167 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.471295 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.471431 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.471650 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmn2j\" (UniqueName: \"kubernetes.io/projected/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-kube-api-access-wmn2j\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.573585 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.573701 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.573759 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmn2j\" (UniqueName: \"kubernetes.io/projected/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-kube-api-access-wmn2j\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.578962 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h\" (UID: 
\"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.579854 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.598583 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmn2j\" (UniqueName: \"kubernetes.io/projected/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-kube-api-access-wmn2j\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:11 crc kubenswrapper[4687]: I1203 18:13:11.678866 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:12 crc kubenswrapper[4687]: I1203 18:13:12.242471 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h"] Dec 03 18:13:13 crc kubenswrapper[4687]: I1203 18:13:13.260196 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" event={"ID":"b8e74449-f8e1-4cf8-8a93-e04ee18070e1","Type":"ContainerStarted","Data":"679163def55bc76b538c8c0dab414c27c98d4fa713e05a806c1b52324d9ed2d5"} Dec 03 18:13:13 crc kubenswrapper[4687]: I1203 18:13:13.260715 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" event={"ID":"b8e74449-f8e1-4cf8-8a93-e04ee18070e1","Type":"ContainerStarted","Data":"c8038725c8a6b158c670bcdb77bfe9fdc5c809fae8523432c827277bd412b1ca"} Dec 03 18:13:13 crc kubenswrapper[4687]: I1203 18:13:13.284776 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" podStartSLOduration=1.831047087 podStartE2EDuration="2.284748753s" podCreationTimestamp="2025-12-03 18:13:11 +0000 UTC" firstStartedPulling="2025-12-03 18:13:12.247778315 +0000 UTC m=+2025.138473748" lastFinishedPulling="2025-12-03 18:13:12.701479981 +0000 UTC m=+2025.592175414" observedRunningTime="2025-12-03 18:13:13.278960887 +0000 UTC m=+2026.169656320" watchObservedRunningTime="2025-12-03 18:13:13.284748753 +0000 UTC m=+2026.175444196" Dec 03 18:13:23 crc kubenswrapper[4687]: I1203 18:13:23.346608 4687 generic.go:334] "Generic (PLEG): container finished" podID="b8e74449-f8e1-4cf8-8a93-e04ee18070e1" containerID="679163def55bc76b538c8c0dab414c27c98d4fa713e05a806c1b52324d9ed2d5" exitCode=0 Dec 03 18:13:23 crc kubenswrapper[4687]: I1203 18:13:23.346708 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" event={"ID":"b8e74449-f8e1-4cf8-8a93-e04ee18070e1","Type":"ContainerDied","Data":"679163def55bc76b538c8c0dab414c27c98d4fa713e05a806c1b52324d9ed2d5"} Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.735872 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.832742 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-inventory\") pod \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.832872 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-ssh-key\") pod \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.832915 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmn2j\" (UniqueName: \"kubernetes.io/projected/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-kube-api-access-wmn2j\") pod \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\" (UID: \"b8e74449-f8e1-4cf8-8a93-e04ee18070e1\") " Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.838730 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-kube-api-access-wmn2j" (OuterVolumeSpecName: "kube-api-access-wmn2j") pod "b8e74449-f8e1-4cf8-8a93-e04ee18070e1" (UID: "b8e74449-f8e1-4cf8-8a93-e04ee18070e1"). InnerVolumeSpecName "kube-api-access-wmn2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.858932 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b8e74449-f8e1-4cf8-8a93-e04ee18070e1" (UID: "b8e74449-f8e1-4cf8-8a93-e04ee18070e1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.872293 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-inventory" (OuterVolumeSpecName: "inventory") pod "b8e74449-f8e1-4cf8-8a93-e04ee18070e1" (UID: "b8e74449-f8e1-4cf8-8a93-e04ee18070e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.936445 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.936480 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmn2j\" (UniqueName: \"kubernetes.io/projected/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-kube-api-access-wmn2j\") on node \"crc\" DevicePath \"\"" Dec 03 18:13:24 crc kubenswrapper[4687]: I1203 18:13:24.936494 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b8e74449-f8e1-4cf8-8a93-e04ee18070e1-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.364520 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" 
event={"ID":"b8e74449-f8e1-4cf8-8a93-e04ee18070e1","Type":"ContainerDied","Data":"c8038725c8a6b158c670bcdb77bfe9fdc5c809fae8523432c827277bd412b1ca"} Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.365102 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8038725c8a6b158c670bcdb77bfe9fdc5c809fae8523432c827277bd412b1ca" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.364617 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.473792 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls"] Dec 03 18:13:25 crc kubenswrapper[4687]: E1203 18:13:25.474331 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e74449-f8e1-4cf8-8a93-e04ee18070e1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.474351 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e74449-f8e1-4cf8-8a93-e04ee18070e1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.474665 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e74449-f8e1-4cf8-8a93-e04ee18070e1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.475575 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.480667 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.480833 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.480942 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.481044 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.481186 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.481290 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.481515 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.484241 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls"] Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.486953 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.552686 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.552776 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.552803 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.552863 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553000 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553032 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553087 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ct4x\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-kube-api-access-8ct4x\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553158 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553391 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-nova-combined-ca-bundle\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553437 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553478 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553526 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553568 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.553656 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656169 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656285 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656341 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656377 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656425 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ct4x\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-kube-api-access-8ct4x\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656476 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656511 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: 
\"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656566 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656619 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656694 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656733 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: 
\"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656827 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.656909 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.657715 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.662329 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.662903 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.662970 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.664200 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.664500 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.664792 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.665059 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.666219 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.666451 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.668288 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.668942 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.674009 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.678885 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.683401 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ct4x\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-kube-api-access-8ct4x\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:25 crc kubenswrapper[4687]: I1203 18:13:25.797062 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:13:26 crc kubenswrapper[4687]: I1203 18:13:26.322328 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls"] Dec 03 18:13:26 crc kubenswrapper[4687]: I1203 18:13:26.375972 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" event={"ID":"ff93c8d7-1225-45d9-952c-f770d7ad7e33","Type":"ContainerStarted","Data":"f9e3e57213d4c7e85a52ddc4a23d2686f538ddef98dcba9913224826a0c2d523"} Dec 03 18:13:27 crc kubenswrapper[4687]: I1203 18:13:27.384335 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" event={"ID":"ff93c8d7-1225-45d9-952c-f770d7ad7e33","Type":"ContainerStarted","Data":"38f807749d9221a95d5fb4d678058cf13721f99f5d46180c7a193d770a2673f8"} Dec 03 18:13:27 crc kubenswrapper[4687]: I1203 18:13:27.404317 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" podStartSLOduration=1.873420039 podStartE2EDuration="2.404298578s" podCreationTimestamp="2025-12-03 18:13:25 +0000 UTC" firstStartedPulling="2025-12-03 18:13:26.328579533 +0000 UTC m=+2039.219274966" lastFinishedPulling="2025-12-03 18:13:26.859458082 +0000 UTC m=+2039.750153505" observedRunningTime="2025-12-03 18:13:27.40029879 +0000 UTC m=+2040.290994233" watchObservedRunningTime="2025-12-03 18:13:27.404298578 +0000 UTC m=+2040.294994011" Dec 03 18:13:28 crc kubenswrapper[4687]: I1203 18:13:28.944914 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hcksj"] Dec 03 18:13:28 crc 
kubenswrapper[4687]: I1203 18:13:28.954752 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:28 crc kubenswrapper[4687]: I1203 18:13:28.959940 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcksj"] Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.037750 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-catalog-content\") pod \"redhat-operators-hcksj\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.037841 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-utilities\") pod \"redhat-operators-hcksj\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.037896 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k46wx\" (UniqueName: \"kubernetes.io/projected/824f6b78-2f1a-47a7-b63f-8a0126c69d65-kube-api-access-k46wx\") pod \"redhat-operators-hcksj\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.140005 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-utilities\") pod \"redhat-operators-hcksj\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 
18:13:29.140074 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k46wx\" (UniqueName: \"kubernetes.io/projected/824f6b78-2f1a-47a7-b63f-8a0126c69d65-kube-api-access-k46wx\") pod \"redhat-operators-hcksj\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.140211 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-catalog-content\") pod \"redhat-operators-hcksj\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.140638 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-catalog-content\") pod \"redhat-operators-hcksj\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.140939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-utilities\") pod \"redhat-operators-hcksj\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.160649 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k46wx\" (UniqueName: \"kubernetes.io/projected/824f6b78-2f1a-47a7-b63f-8a0126c69d65-kube-api-access-k46wx\") pod \"redhat-operators-hcksj\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.277474 4687 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:29 crc kubenswrapper[4687]: W1203 18:13:29.759771 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod824f6b78_2f1a_47a7_b63f_8a0126c69d65.slice/crio-7d44ec269f484078df80488df8bc2bf41218a092aa38798907a8f3068188bf8b WatchSource:0}: Error finding container 7d44ec269f484078df80488df8bc2bf41218a092aa38798907a8f3068188bf8b: Status 404 returned error can't find the container with id 7d44ec269f484078df80488df8bc2bf41218a092aa38798907a8f3068188bf8b Dec 03 18:13:29 crc kubenswrapper[4687]: I1203 18:13:29.771558 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcksj"] Dec 03 18:13:30 crc kubenswrapper[4687]: I1203 18:13:30.424669 4687 generic.go:334] "Generic (PLEG): container finished" podID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerID="3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76" exitCode=0 Dec 03 18:13:30 crc kubenswrapper[4687]: I1203 18:13:30.424774 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcksj" event={"ID":"824f6b78-2f1a-47a7-b63f-8a0126c69d65","Type":"ContainerDied","Data":"3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76"} Dec 03 18:13:30 crc kubenswrapper[4687]: I1203 18:13:30.424994 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcksj" event={"ID":"824f6b78-2f1a-47a7-b63f-8a0126c69d65","Type":"ContainerStarted","Data":"7d44ec269f484078df80488df8bc2bf41218a092aa38798907a8f3068188bf8b"} Dec 03 18:13:32 crc kubenswrapper[4687]: I1203 18:13:32.443827 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcksj" 
event={"ID":"824f6b78-2f1a-47a7-b63f-8a0126c69d65","Type":"ContainerStarted","Data":"a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c"} Dec 03 18:13:34 crc kubenswrapper[4687]: I1203 18:13:34.463825 4687 generic.go:334] "Generic (PLEG): container finished" podID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerID="a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c" exitCode=0 Dec 03 18:13:34 crc kubenswrapper[4687]: I1203 18:13:34.463916 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcksj" event={"ID":"824f6b78-2f1a-47a7-b63f-8a0126c69d65","Type":"ContainerDied","Data":"a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c"} Dec 03 18:13:35 crc kubenswrapper[4687]: I1203 18:13:35.474653 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcksj" event={"ID":"824f6b78-2f1a-47a7-b63f-8a0126c69d65","Type":"ContainerStarted","Data":"c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08"} Dec 03 18:13:35 crc kubenswrapper[4687]: I1203 18:13:35.501345 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hcksj" podStartSLOduration=2.674644986 podStartE2EDuration="7.501327101s" podCreationTimestamp="2025-12-03 18:13:28 +0000 UTC" firstStartedPulling="2025-12-03 18:13:30.427109375 +0000 UTC m=+2043.317804808" lastFinishedPulling="2025-12-03 18:13:35.25379148 +0000 UTC m=+2048.144486923" observedRunningTime="2025-12-03 18:13:35.494950388 +0000 UTC m=+2048.385645821" watchObservedRunningTime="2025-12-03 18:13:35.501327101 +0000 UTC m=+2048.392022534" Dec 03 18:13:39 crc kubenswrapper[4687]: I1203 18:13:39.278322 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:39 crc kubenswrapper[4687]: I1203 18:13:39.278850 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:40 crc kubenswrapper[4687]: I1203 18:13:40.325868 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hcksj" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerName="registry-server" probeResult="failure" output=< Dec 03 18:13:40 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Dec 03 18:13:40 crc kubenswrapper[4687]: > Dec 03 18:13:49 crc kubenswrapper[4687]: I1203 18:13:49.340877 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:49 crc kubenswrapper[4687]: I1203 18:13:49.390754 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:49 crc kubenswrapper[4687]: I1203 18:13:49.574333 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcksj"] Dec 03 18:13:50 crc kubenswrapper[4687]: I1203 18:13:50.598969 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hcksj" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerName="registry-server" containerID="cri-o://c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08" gracePeriod=2 Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.089468 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.183107 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k46wx\" (UniqueName: \"kubernetes.io/projected/824f6b78-2f1a-47a7-b63f-8a0126c69d65-kube-api-access-k46wx\") pod \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.183274 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-utilities\") pod \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.183406 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-catalog-content\") pod \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\" (UID: \"824f6b78-2f1a-47a7-b63f-8a0126c69d65\") " Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.184295 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-utilities" (OuterVolumeSpecName: "utilities") pod "824f6b78-2f1a-47a7-b63f-8a0126c69d65" (UID: "824f6b78-2f1a-47a7-b63f-8a0126c69d65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.190442 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/824f6b78-2f1a-47a7-b63f-8a0126c69d65-kube-api-access-k46wx" (OuterVolumeSpecName: "kube-api-access-k46wx") pod "824f6b78-2f1a-47a7-b63f-8a0126c69d65" (UID: "824f6b78-2f1a-47a7-b63f-8a0126c69d65"). InnerVolumeSpecName "kube-api-access-k46wx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.285598 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k46wx\" (UniqueName: \"kubernetes.io/projected/824f6b78-2f1a-47a7-b63f-8a0126c69d65-kube-api-access-k46wx\") on node \"crc\" DevicePath \"\"" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.285641 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.297956 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "824f6b78-2f1a-47a7-b63f-8a0126c69d65" (UID: "824f6b78-2f1a-47a7-b63f-8a0126c69d65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.387046 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/824f6b78-2f1a-47a7-b63f-8a0126c69d65-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.609373 4687 generic.go:334] "Generic (PLEG): container finished" podID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerID="c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08" exitCode=0 Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.609425 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcksj" event={"ID":"824f6b78-2f1a-47a7-b63f-8a0126c69d65","Type":"ContainerDied","Data":"c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08"} Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.609459 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-hcksj" event={"ID":"824f6b78-2f1a-47a7-b63f-8a0126c69d65","Type":"ContainerDied","Data":"7d44ec269f484078df80488df8bc2bf41218a092aa38798907a8f3068188bf8b"} Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.609481 4687 scope.go:117] "RemoveContainer" containerID="c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.609483 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcksj" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.636015 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcksj"] Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.640478 4687 scope.go:117] "RemoveContainer" containerID="a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.645037 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hcksj"] Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.664395 4687 scope.go:117] "RemoveContainer" containerID="3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.708016 4687 scope.go:117] "RemoveContainer" containerID="c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08" Dec 03 18:13:51 crc kubenswrapper[4687]: E1203 18:13:51.710426 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08\": container with ID starting with c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08 not found: ID does not exist" containerID="c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.710490 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08"} err="failed to get container status \"c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08\": rpc error: code = NotFound desc = could not find container \"c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08\": container with ID starting with c8adb5c9055fa814cd092aa8d5b27803ef952928c391d43a999c6fb7c8d84a08 not found: ID does not exist" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.710518 4687 scope.go:117] "RemoveContainer" containerID="a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c" Dec 03 18:13:51 crc kubenswrapper[4687]: E1203 18:13:51.710837 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c\": container with ID starting with a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c not found: ID does not exist" containerID="a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.710881 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c"} err="failed to get container status \"a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c\": rpc error: code = NotFound desc = could not find container \"a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c\": container with ID starting with a679ac955f77e4a08c0329b5222879dafe2f77e5520e9c87bcb8409a043aee5c not found: ID does not exist" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.710908 4687 scope.go:117] "RemoveContainer" containerID="3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76" Dec 03 18:13:51 crc kubenswrapper[4687]: E1203 
18:13:51.711188 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76\": container with ID starting with 3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76 not found: ID does not exist" containerID="3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76" Dec 03 18:13:51 crc kubenswrapper[4687]: I1203 18:13:51.711207 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76"} err="failed to get container status \"3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76\": rpc error: code = NotFound desc = could not find container \"3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76\": container with ID starting with 3da60cd01bd8853c2b7dbab6d259af93b0eef091ed71425fbb4ba04d624a9b76 not found: ID does not exist" Dec 03 18:13:53 crc kubenswrapper[4687]: I1203 18:13:53.424594 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" path="/var/lib/kubelet/pods/824f6b78-2f1a-47a7-b63f-8a0126c69d65/volumes" Dec 03 18:14:08 crc kubenswrapper[4687]: I1203 18:14:08.755480 4687 generic.go:334] "Generic (PLEG): container finished" podID="ff93c8d7-1225-45d9-952c-f770d7ad7e33" containerID="38f807749d9221a95d5fb4d678058cf13721f99f5d46180c7a193d770a2673f8" exitCode=0 Dec 03 18:14:08 crc kubenswrapper[4687]: I1203 18:14:08.755573 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" event={"ID":"ff93c8d7-1225-45d9-952c-f770d7ad7e33","Type":"ContainerDied","Data":"38f807749d9221a95d5fb4d678058cf13721f99f5d46180c7a193d770a2673f8"} Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.155699 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247326 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-repo-setup-combined-ca-bundle\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247393 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-telemetry-combined-ca-bundle\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247446 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247490 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-inventory\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247537 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-neutron-metadata-combined-ca-bundle\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: 
\"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247558 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ovn-combined-ca-bundle\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247594 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-ovn-default-certs-0\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247617 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247675 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-nova-combined-ca-bundle\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247718 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: 
\"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247747 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-libvirt-combined-ca-bundle\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247768 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ct4x\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-kube-api-access-8ct4x\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247784 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-bootstrap-combined-ca-bundle\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.247829 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ssh-key\") pod \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\" (UID: \"ff93c8d7-1225-45d9-952c-f770d7ad7e33\") " Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.253864 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.253922 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.260775 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.260853 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.260906 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.260920 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.260942 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.261151 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.261598 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.261630 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.262357 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.267284 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-kube-api-access-8ct4x" (OuterVolumeSpecName: "kube-api-access-8ct4x") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "kube-api-access-8ct4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.281834 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-inventory" (OuterVolumeSpecName: "inventory") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.288207 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ff93c8d7-1225-45d9-952c-f770d7ad7e33" (UID: "ff93c8d7-1225-45d9-952c-f770d7ad7e33"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350623 4687 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350683 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350698 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350712 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350726 4687 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 
18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350738 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350752 4687 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350767 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ct4x\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-kube-api-access-8ct4x\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350779 4687 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350790 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350801 4687 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350811 4687 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350823 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/ff93c8d7-1225-45d9-952c-f770d7ad7e33-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.350837 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff93c8d7-1225-45d9-952c-f770d7ad7e33-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.774103 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" event={"ID":"ff93c8d7-1225-45d9-952c-f770d7ad7e33","Type":"ContainerDied","Data":"f9e3e57213d4c7e85a52ddc4a23d2686f538ddef98dcba9913224826a0c2d523"} Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.774461 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e3e57213d4c7e85a52ddc4a23d2686f538ddef98dcba9913224826a0c2d523" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.774294 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.890540 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2"] Dec 03 18:14:10 crc kubenswrapper[4687]: E1203 18:14:10.891005 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff93c8d7-1225-45d9-952c-f770d7ad7e33" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.891031 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff93c8d7-1225-45d9-952c-f770d7ad7e33" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 18:14:10 crc kubenswrapper[4687]: E1203 18:14:10.891045 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerName="registry-server" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.891054 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerName="registry-server" Dec 03 18:14:10 crc kubenswrapper[4687]: E1203 18:14:10.891073 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerName="extract-content" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.891082 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerName="extract-content" Dec 03 18:14:10 crc kubenswrapper[4687]: E1203 18:14:10.891099 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerName="extract-utilities" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.891106 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerName="extract-utilities" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.891342 
4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="824f6b78-2f1a-47a7-b63f-8a0126c69d65" containerName="registry-server" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.891359 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff93c8d7-1225-45d9-952c-f770d7ad7e33" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.892149 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.895862 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.896057 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.896174 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.896256 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.896304 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.902023 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2"] Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.963300 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: 
\"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.963388 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.963409 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.963454 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pz6p\" (UniqueName: \"kubernetes.io/projected/cf4db291-8ad7-4e7e-8843-29e3287b05ca-kube-api-access-9pz6p\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:10 crc kubenswrapper[4687]: I1203 18:14:10.963547 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.065273 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.065553 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.065752 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.065781 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.065946 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pz6p\" (UniqueName: \"kubernetes.io/projected/cf4db291-8ad7-4e7e-8843-29e3287b05ca-kube-api-access-9pz6p\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.066620 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.071161 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.076362 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.077089 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.084534 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pz6p\" (UniqueName: \"kubernetes.io/projected/cf4db291-8ad7-4e7e-8843-29e3287b05ca-kube-api-access-9pz6p\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-tqzn2\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.209754 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.677584 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gdz6j"] Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.685544 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.691432 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdz6j"] Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.784226 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-utilities\") pod \"community-operators-gdz6j\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.784336 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-catalog-content\") pod \"community-operators-gdz6j\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.784389 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85dm\" (UniqueName: 
\"kubernetes.io/projected/7662d5a5-51ac-46e6-ad1b-c767192b7650-kube-api-access-m85dm\") pod \"community-operators-gdz6j\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.885750 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-catalog-content\") pod \"community-operators-gdz6j\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.885870 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m85dm\" (UniqueName: \"kubernetes.io/projected/7662d5a5-51ac-46e6-ad1b-c767192b7650-kube-api-access-m85dm\") pod \"community-operators-gdz6j\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.885947 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-utilities\") pod \"community-operators-gdz6j\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.886388 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-catalog-content\") pod \"community-operators-gdz6j\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.886525 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-utilities\") pod \"community-operators-gdz6j\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.897360 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2"] Dec 03 18:14:11 crc kubenswrapper[4687]: I1203 18:14:11.918585 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m85dm\" (UniqueName: \"kubernetes.io/projected/7662d5a5-51ac-46e6-ad1b-c767192b7650-kube-api-access-m85dm\") pod \"community-operators-gdz6j\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:12 crc kubenswrapper[4687]: I1203 18:14:12.003818 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:12 crc kubenswrapper[4687]: I1203 18:14:12.531155 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdz6j"] Dec 03 18:14:12 crc kubenswrapper[4687]: I1203 18:14:12.797415 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" event={"ID":"cf4db291-8ad7-4e7e-8843-29e3287b05ca","Type":"ContainerStarted","Data":"f76cc8e828ab9dcbfb68cf94c780111ace0504856cd8e1eb176c79a7fc4a1091"} Dec 03 18:14:12 crc kubenswrapper[4687]: I1203 18:14:12.797783 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" event={"ID":"cf4db291-8ad7-4e7e-8843-29e3287b05ca","Type":"ContainerStarted","Data":"d8ee53f2590969f8a7ca34bf96873fe677bb8473828db5d6a312c44200c5d9e6"} Dec 03 18:14:12 crc kubenswrapper[4687]: I1203 18:14:12.800057 4687 generic.go:334] "Generic (PLEG): container finished" podID="7662d5a5-51ac-46e6-ad1b-c767192b7650" 
containerID="6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29" exitCode=0 Dec 03 18:14:12 crc kubenswrapper[4687]: I1203 18:14:12.800096 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz6j" event={"ID":"7662d5a5-51ac-46e6-ad1b-c767192b7650","Type":"ContainerDied","Data":"6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29"} Dec 03 18:14:12 crc kubenswrapper[4687]: I1203 18:14:12.800158 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz6j" event={"ID":"7662d5a5-51ac-46e6-ad1b-c767192b7650","Type":"ContainerStarted","Data":"3ad288129a762303d970983f2f1cc44cc2febe707b5828bf8f3b8ca9d1fa4ab0"} Dec 03 18:14:12 crc kubenswrapper[4687]: I1203 18:14:12.823001 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" podStartSLOduration=2.404947039 podStartE2EDuration="2.822980302s" podCreationTimestamp="2025-12-03 18:14:10 +0000 UTC" firstStartedPulling="2025-12-03 18:14:11.901238594 +0000 UTC m=+2084.791934027" lastFinishedPulling="2025-12-03 18:14:12.319271857 +0000 UTC m=+2085.209967290" observedRunningTime="2025-12-03 18:14:12.813897098 +0000 UTC m=+2085.704592531" watchObservedRunningTime="2025-12-03 18:14:12.822980302 +0000 UTC m=+2085.713675735" Dec 03 18:14:13 crc kubenswrapper[4687]: I1203 18:14:13.810319 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz6j" event={"ID":"7662d5a5-51ac-46e6-ad1b-c767192b7650","Type":"ContainerStarted","Data":"5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340"} Dec 03 18:14:14 crc kubenswrapper[4687]: I1203 18:14:14.820525 4687 generic.go:334] "Generic (PLEG): container finished" podID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerID="5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340" exitCode=0 Dec 03 18:14:14 crc kubenswrapper[4687]: 
I1203 18:14:14.820594 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz6j" event={"ID":"7662d5a5-51ac-46e6-ad1b-c767192b7650","Type":"ContainerDied","Data":"5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340"} Dec 03 18:14:15 crc kubenswrapper[4687]: I1203 18:14:15.833433 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz6j" event={"ID":"7662d5a5-51ac-46e6-ad1b-c767192b7650","Type":"ContainerStarted","Data":"7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99"} Dec 03 18:14:15 crc kubenswrapper[4687]: I1203 18:14:15.862652 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gdz6j" podStartSLOduration=2.441521518 podStartE2EDuration="4.862627624s" podCreationTimestamp="2025-12-03 18:14:11 +0000 UTC" firstStartedPulling="2025-12-03 18:14:12.801491393 +0000 UTC m=+2085.692186826" lastFinishedPulling="2025-12-03 18:14:15.222597489 +0000 UTC m=+2088.113292932" observedRunningTime="2025-12-03 18:14:15.853936999 +0000 UTC m=+2088.744632432" watchObservedRunningTime="2025-12-03 18:14:15.862627624 +0000 UTC m=+2088.753323067" Dec 03 18:14:22 crc kubenswrapper[4687]: I1203 18:14:22.004163 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:22 crc kubenswrapper[4687]: I1203 18:14:22.004664 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:22 crc kubenswrapper[4687]: I1203 18:14:22.052185 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:22 crc kubenswrapper[4687]: I1203 18:14:22.944916 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gdz6j" 
Dec 03 18:14:23 crc kubenswrapper[4687]: I1203 18:14:23.016987 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdz6j"] Dec 03 18:14:24 crc kubenswrapper[4687]: I1203 18:14:24.927388 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gdz6j" podUID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerName="registry-server" containerID="cri-o://7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99" gracePeriod=2 Dec 03 18:14:25 crc kubenswrapper[4687]: I1203 18:14:25.925697 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:25 crc kubenswrapper[4687]: I1203 18:14:25.938026 4687 generic.go:334] "Generic (PLEG): container finished" podID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerID="7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99" exitCode=0 Dec 03 18:14:25 crc kubenswrapper[4687]: I1203 18:14:25.938089 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz6j" event={"ID":"7662d5a5-51ac-46e6-ad1b-c767192b7650","Type":"ContainerDied","Data":"7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99"} Dec 03 18:14:25 crc kubenswrapper[4687]: I1203 18:14:25.938097 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gdz6j" Dec 03 18:14:25 crc kubenswrapper[4687]: I1203 18:14:25.938155 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdz6j" event={"ID":"7662d5a5-51ac-46e6-ad1b-c767192b7650","Type":"ContainerDied","Data":"3ad288129a762303d970983f2f1cc44cc2febe707b5828bf8f3b8ca9d1fa4ab0"} Dec 03 18:14:25 crc kubenswrapper[4687]: I1203 18:14:25.938178 4687 scope.go:117] "RemoveContainer" containerID="7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99" Dec 03 18:14:25 crc kubenswrapper[4687]: I1203 18:14:25.970445 4687 scope.go:117] "RemoveContainer" containerID="5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.020598 4687 scope.go:117] "RemoveContainer" containerID="6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.050887 4687 scope.go:117] "RemoveContainer" containerID="7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99" Dec 03 18:14:26 crc kubenswrapper[4687]: E1203 18:14:26.051326 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99\": container with ID starting with 7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99 not found: ID does not exist" containerID="7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.051479 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99"} err="failed to get container status \"7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99\": rpc error: code = NotFound desc = could not find container 
\"7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99\": container with ID starting with 7e792c51289a71b5e389b616b2a81995312ac615a4000eeddb400e3287267c99 not found: ID does not exist" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.051513 4687 scope.go:117] "RemoveContainer" containerID="5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340" Dec 03 18:14:26 crc kubenswrapper[4687]: E1203 18:14:26.051896 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340\": container with ID starting with 5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340 not found: ID does not exist" containerID="5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.051930 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340"} err="failed to get container status \"5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340\": rpc error: code = NotFound desc = could not find container \"5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340\": container with ID starting with 5dfafef0af330871cc3b297db70dbc14814285b405caf7503d5d7bd981872340 not found: ID does not exist" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.051957 4687 scope.go:117] "RemoveContainer" containerID="6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29" Dec 03 18:14:26 crc kubenswrapper[4687]: E1203 18:14:26.052270 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29\": container with ID starting with 6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29 not found: ID does not exist" 
containerID="6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.052338 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29"} err="failed to get container status \"6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29\": rpc error: code = NotFound desc = could not find container \"6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29\": container with ID starting with 6ab3303f6fd92be654ab1cc027383a43f795ab22b0f98f357fbd2761d6651a29 not found: ID does not exist" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.090596 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-utilities\") pod \"7662d5a5-51ac-46e6-ad1b-c767192b7650\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.090662 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-catalog-content\") pod \"7662d5a5-51ac-46e6-ad1b-c767192b7650\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.090720 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m85dm\" (UniqueName: \"kubernetes.io/projected/7662d5a5-51ac-46e6-ad1b-c767192b7650-kube-api-access-m85dm\") pod \"7662d5a5-51ac-46e6-ad1b-c767192b7650\" (UID: \"7662d5a5-51ac-46e6-ad1b-c767192b7650\") " Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.092235 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-utilities" (OuterVolumeSpecName: "utilities") pod 
"7662d5a5-51ac-46e6-ad1b-c767192b7650" (UID: "7662d5a5-51ac-46e6-ad1b-c767192b7650"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.109470 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7662d5a5-51ac-46e6-ad1b-c767192b7650-kube-api-access-m85dm" (OuterVolumeSpecName: "kube-api-access-m85dm") pod "7662d5a5-51ac-46e6-ad1b-c767192b7650" (UID: "7662d5a5-51ac-46e6-ad1b-c767192b7650"). InnerVolumeSpecName "kube-api-access-m85dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.150084 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7662d5a5-51ac-46e6-ad1b-c767192b7650" (UID: "7662d5a5-51ac-46e6-ad1b-c767192b7650"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.192946 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.192981 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7662d5a5-51ac-46e6-ad1b-c767192b7650-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.192999 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m85dm\" (UniqueName: \"kubernetes.io/projected/7662d5a5-51ac-46e6-ad1b-c767192b7650-kube-api-access-m85dm\") on node \"crc\" DevicePath \"\"" Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.279682 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdz6j"] Dec 03 18:14:26 crc kubenswrapper[4687]: I1203 18:14:26.288511 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gdz6j"] Dec 03 18:14:27 crc kubenswrapper[4687]: I1203 18:14:27.422266 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7662d5a5-51ac-46e6-ad1b-c767192b7650" path="/var/lib/kubelet/pods/7662d5a5-51ac-46e6-ad1b-c767192b7650/volumes" Dec 03 18:14:44 crc kubenswrapper[4687]: I1203 18:14:44.111675 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:14:44 crc kubenswrapper[4687]: I1203 18:14:44.112389 4687 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.146751 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq"] Dec 03 18:15:00 crc kubenswrapper[4687]: E1203 18:15:00.147905 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerName="registry-server" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.147923 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerName="registry-server" Dec 03 18:15:00 crc kubenswrapper[4687]: E1203 18:15:00.147945 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerName="extract-utilities" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.147954 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerName="extract-utilities" Dec 03 18:15:00 crc kubenswrapper[4687]: E1203 18:15:00.148001 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerName="extract-content" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.148012 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerName="extract-content" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.148262 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7662d5a5-51ac-46e6-ad1b-c767192b7650" containerName="registry-server" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.149045 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.153051 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.153261 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.160872 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq"] Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.263747 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de363e74-3a08-4bba-b12c-4cbeeffad444-config-volume\") pod \"collect-profiles-29413095-w5lzq\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.263806 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v78gk\" (UniqueName: \"kubernetes.io/projected/de363e74-3a08-4bba-b12c-4cbeeffad444-kube-api-access-v78gk\") pod \"collect-profiles-29413095-w5lzq\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.264298 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de363e74-3a08-4bba-b12c-4cbeeffad444-secret-volume\") pod \"collect-profiles-29413095-w5lzq\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.366303 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de363e74-3a08-4bba-b12c-4cbeeffad444-secret-volume\") pod \"collect-profiles-29413095-w5lzq\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.366416 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de363e74-3a08-4bba-b12c-4cbeeffad444-config-volume\") pod \"collect-profiles-29413095-w5lzq\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.366449 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v78gk\" (UniqueName: \"kubernetes.io/projected/de363e74-3a08-4bba-b12c-4cbeeffad444-kube-api-access-v78gk\") pod \"collect-profiles-29413095-w5lzq\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.367596 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de363e74-3a08-4bba-b12c-4cbeeffad444-config-volume\") pod \"collect-profiles-29413095-w5lzq\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.372753 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/de363e74-3a08-4bba-b12c-4cbeeffad444-secret-volume\") pod \"collect-profiles-29413095-w5lzq\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.383510 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v78gk\" (UniqueName: \"kubernetes.io/projected/de363e74-3a08-4bba-b12c-4cbeeffad444-kube-api-access-v78gk\") pod \"collect-profiles-29413095-w5lzq\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.492465 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:00 crc kubenswrapper[4687]: I1203 18:15:00.938320 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq"] Dec 03 18:15:00 crc kubenswrapper[4687]: W1203 18:15:00.943876 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde363e74_3a08_4bba_b12c_4cbeeffad444.slice/crio-c2cb679de195c66a3677b7f15c92e98ca345b16ee4e55282e5fa7fe345acdcc3 WatchSource:0}: Error finding container c2cb679de195c66a3677b7f15c92e98ca345b16ee4e55282e5fa7fe345acdcc3: Status 404 returned error can't find the container with id c2cb679de195c66a3677b7f15c92e98ca345b16ee4e55282e5fa7fe345acdcc3 Dec 03 18:15:01 crc kubenswrapper[4687]: I1203 18:15:01.448747 4687 generic.go:334] "Generic (PLEG): container finished" podID="de363e74-3a08-4bba-b12c-4cbeeffad444" containerID="fbe8d5fa0310ac20be1611439a0dbfaaf26516e63aa69f9f992af690d6fbc1db" exitCode=0 Dec 03 18:15:01 crc kubenswrapper[4687]: I1203 18:15:01.448865 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" event={"ID":"de363e74-3a08-4bba-b12c-4cbeeffad444","Type":"ContainerDied","Data":"fbe8d5fa0310ac20be1611439a0dbfaaf26516e63aa69f9f992af690d6fbc1db"} Dec 03 18:15:01 crc kubenswrapper[4687]: I1203 18:15:01.449073 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" event={"ID":"de363e74-3a08-4bba-b12c-4cbeeffad444","Type":"ContainerStarted","Data":"c2cb679de195c66a3677b7f15c92e98ca345b16ee4e55282e5fa7fe345acdcc3"} Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.786856 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.814202 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de363e74-3a08-4bba-b12c-4cbeeffad444-config-volume\") pod \"de363e74-3a08-4bba-b12c-4cbeeffad444\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.814263 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de363e74-3a08-4bba-b12c-4cbeeffad444-secret-volume\") pod \"de363e74-3a08-4bba-b12c-4cbeeffad444\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.814294 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v78gk\" (UniqueName: \"kubernetes.io/projected/de363e74-3a08-4bba-b12c-4cbeeffad444-kube-api-access-v78gk\") pod \"de363e74-3a08-4bba-b12c-4cbeeffad444\" (UID: \"de363e74-3a08-4bba-b12c-4cbeeffad444\") " Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.815925 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/de363e74-3a08-4bba-b12c-4cbeeffad444-config-volume" (OuterVolumeSpecName: "config-volume") pod "de363e74-3a08-4bba-b12c-4cbeeffad444" (UID: "de363e74-3a08-4bba-b12c-4cbeeffad444"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.821277 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de363e74-3a08-4bba-b12c-4cbeeffad444-kube-api-access-v78gk" (OuterVolumeSpecName: "kube-api-access-v78gk") pod "de363e74-3a08-4bba-b12c-4cbeeffad444" (UID: "de363e74-3a08-4bba-b12c-4cbeeffad444"). InnerVolumeSpecName "kube-api-access-v78gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.821656 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de363e74-3a08-4bba-b12c-4cbeeffad444-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "de363e74-3a08-4bba-b12c-4cbeeffad444" (UID: "de363e74-3a08-4bba-b12c-4cbeeffad444"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.917099 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de363e74-3a08-4bba-b12c-4cbeeffad444-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.917150 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de363e74-3a08-4bba-b12c-4cbeeffad444-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 18:15:02 crc kubenswrapper[4687]: I1203 18:15:02.917163 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v78gk\" (UniqueName: \"kubernetes.io/projected/de363e74-3a08-4bba-b12c-4cbeeffad444-kube-api-access-v78gk\") on node \"crc\" DevicePath \"\"" Dec 03 18:15:03 crc kubenswrapper[4687]: I1203 18:15:03.471634 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" event={"ID":"de363e74-3a08-4bba-b12c-4cbeeffad444","Type":"ContainerDied","Data":"c2cb679de195c66a3677b7f15c92e98ca345b16ee4e55282e5fa7fe345acdcc3"} Dec 03 18:15:03 crc kubenswrapper[4687]: I1203 18:15:03.471921 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq" Dec 03 18:15:03 crc kubenswrapper[4687]: I1203 18:15:03.471949 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2cb679de195c66a3677b7f15c92e98ca345b16ee4e55282e5fa7fe345acdcc3" Dec 03 18:15:03 crc kubenswrapper[4687]: I1203 18:15:03.878905 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm"] Dec 03 18:15:03 crc kubenswrapper[4687]: I1203 18:15:03.886504 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413050-vk7fm"] Dec 03 18:15:05 crc kubenswrapper[4687]: I1203 18:15:05.418064 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c2c1d3-31da-423e-8e09-8d11382908b5" path="/var/lib/kubelet/pods/15c2c1d3-31da-423e-8e09-8d11382908b5/volumes" Dec 03 18:15:14 crc kubenswrapper[4687]: I1203 18:15:14.112195 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:15:14 crc kubenswrapper[4687]: I1203 18:15:14.112783 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:15:23 crc kubenswrapper[4687]: E1203 18:15:23.030933 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf4db291_8ad7_4e7e_8843_29e3287b05ca.slice/crio-f76cc8e828ab9dcbfb68cf94c780111ace0504856cd8e1eb176c79a7fc4a1091.scope\": RecentStats: unable to find data in memory cache]" Dec 03 18:15:23 crc kubenswrapper[4687]: I1203 18:15:23.699763 4687 generic.go:334] "Generic (PLEG): container finished" podID="cf4db291-8ad7-4e7e-8843-29e3287b05ca" containerID="f76cc8e828ab9dcbfb68cf94c780111ace0504856cd8e1eb176c79a7fc4a1091" exitCode=0 Dec 03 18:15:23 crc kubenswrapper[4687]: I1203 18:15:23.699857 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" event={"ID":"cf4db291-8ad7-4e7e-8843-29e3287b05ca","Type":"ContainerDied","Data":"f76cc8e828ab9dcbfb68cf94c780111ace0504856cd8e1eb176c79a7fc4a1091"} Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.179050 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.293411 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ssh-key\") pod \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.293527 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovn-combined-ca-bundle\") pod \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.293662 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-inventory\") pod 
\"cf4db291-8ad7-4e7e-8843-29e3287b05ca\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.293730 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovncontroller-config-0\") pod \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.293800 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pz6p\" (UniqueName: \"kubernetes.io/projected/cf4db291-8ad7-4e7e-8843-29e3287b05ca-kube-api-access-9pz6p\") pod \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\" (UID: \"cf4db291-8ad7-4e7e-8843-29e3287b05ca\") " Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.303427 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4db291-8ad7-4e7e-8843-29e3287b05ca-kube-api-access-9pz6p" (OuterVolumeSpecName: "kube-api-access-9pz6p") pod "cf4db291-8ad7-4e7e-8843-29e3287b05ca" (UID: "cf4db291-8ad7-4e7e-8843-29e3287b05ca"). InnerVolumeSpecName "kube-api-access-9pz6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.307329 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cf4db291-8ad7-4e7e-8843-29e3287b05ca" (UID: "cf4db291-8ad7-4e7e-8843-29e3287b05ca"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.322332 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "cf4db291-8ad7-4e7e-8843-29e3287b05ca" (UID: "cf4db291-8ad7-4e7e-8843-29e3287b05ca"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.331446 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cf4db291-8ad7-4e7e-8843-29e3287b05ca" (UID: "cf4db291-8ad7-4e7e-8843-29e3287b05ca"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.332404 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-inventory" (OuterVolumeSpecName: "inventory") pod "cf4db291-8ad7-4e7e-8843-29e3287b05ca" (UID: "cf4db291-8ad7-4e7e-8843-29e3287b05ca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.398049 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.398707 4687 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.398736 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pz6p\" (UniqueName: \"kubernetes.io/projected/cf4db291-8ad7-4e7e-8843-29e3287b05ca-kube-api-access-9pz6p\") on node \"crc\" DevicePath \"\"" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.398756 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.398783 4687 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4db291-8ad7-4e7e-8843-29e3287b05ca-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.721872 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" event={"ID":"cf4db291-8ad7-4e7e-8843-29e3287b05ca","Type":"ContainerDied","Data":"d8ee53f2590969f8a7ca34bf96873fe677bb8473828db5d6a312c44200c5d9e6"} Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.721925 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8ee53f2590969f8a7ca34bf96873fe677bb8473828db5d6a312c44200c5d9e6" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 
18:15:25.721978 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tqzn2" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.862976 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6"] Dec 03 18:15:25 crc kubenswrapper[4687]: E1203 18:15:25.863837 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de363e74-3a08-4bba-b12c-4cbeeffad444" containerName="collect-profiles" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.863857 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="de363e74-3a08-4bba-b12c-4cbeeffad444" containerName="collect-profiles" Dec 03 18:15:25 crc kubenswrapper[4687]: E1203 18:15:25.863894 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4db291-8ad7-4e7e-8843-29e3287b05ca" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.863902 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4db291-8ad7-4e7e-8843-29e3287b05ca" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.864075 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4db291-8ad7-4e7e-8843-29e3287b05ca" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.864104 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="de363e74-3a08-4bba-b12c-4cbeeffad444" containerName="collect-profiles" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.864887 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.868026 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.868092 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.868378 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.873414 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.873422 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.873412 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 03 18:15:25 crc kubenswrapper[4687]: I1203 18:15:25.897269 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6"] Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.015801 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.015848 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.015896 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.015950 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.015987 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4bns\" (UniqueName: \"kubernetes.io/projected/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-kube-api-access-m4bns\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.016022 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.117131 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.117212 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.117254 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4bns\" (UniqueName: \"kubernetes.io/projected/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-kube-api-access-m4bns\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.117293 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.117359 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.117378 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.122624 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.122858 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.123565 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.123739 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.124578 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.138630 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4bns\" (UniqueName: \"kubernetes.io/projected/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-kube-api-access-m4bns\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.185251 4687 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.736111 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:15:26 crc kubenswrapper[4687]: I1203 18:15:26.752799 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6"] Dec 03 18:15:27 crc kubenswrapper[4687]: I1203 18:15:27.471421 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:15:27 crc kubenswrapper[4687]: I1203 18:15:27.745814 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" event={"ID":"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502","Type":"ContainerStarted","Data":"8c3c3f9bf444570dcaee89df0e90a1fed8b8d3b5b00248dea1a6261b9af1540b"} Dec 03 18:15:27 crc kubenswrapper[4687]: I1203 18:15:27.746213 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" event={"ID":"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502","Type":"ContainerStarted","Data":"86045d17a1edd19daeab06418565835d84ab2ed2e2a3e743aea646391e5686b7"} Dec 03 18:15:27 crc kubenswrapper[4687]: I1203 18:15:27.769823 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" podStartSLOduration=2.038331706 podStartE2EDuration="2.769797289s" podCreationTimestamp="2025-12-03 18:15:25 +0000 UTC" firstStartedPulling="2025-12-03 18:15:26.735854342 +0000 UTC m=+2159.626549775" lastFinishedPulling="2025-12-03 18:15:27.467319925 +0000 UTC m=+2160.358015358" observedRunningTime="2025-12-03 18:15:27.764216628 +0000 UTC m=+2160.654912091" watchObservedRunningTime="2025-12-03 18:15:27.769797289 +0000 
UTC m=+2160.660492752" Dec 03 18:15:41 crc kubenswrapper[4687]: I1203 18:15:41.020291 4687 scope.go:117] "RemoveContainer" containerID="73a9aa36634349ca1d5198141060e352391b5bf96283439536260eb00a6afb4a" Dec 03 18:15:44 crc kubenswrapper[4687]: I1203 18:15:44.112407 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:15:44 crc kubenswrapper[4687]: I1203 18:15:44.113104 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:15:44 crc kubenswrapper[4687]: I1203 18:15:44.113291 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:15:44 crc kubenswrapper[4687]: I1203 18:15:44.114217 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:15:44 crc kubenswrapper[4687]: I1203 18:15:44.114285 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" gracePeriod=600 Dec 03 18:15:44 
crc kubenswrapper[4687]: E1203 18:15:44.247230 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:15:44 crc kubenswrapper[4687]: I1203 18:15:44.912668 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" exitCode=0 Dec 03 18:15:44 crc kubenswrapper[4687]: I1203 18:15:44.912730 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560"} Dec 03 18:15:44 crc kubenswrapper[4687]: I1203 18:15:44.912772 4687 scope.go:117] "RemoveContainer" containerID="d66cebd5b418cf6cdda6fc2f2a3e9bb11e29fa2a7592a975e56efd8c42700ccd" Dec 03 18:15:44 crc kubenswrapper[4687]: I1203 18:15:44.913567 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:15:44 crc kubenswrapper[4687]: E1203 18:15:44.913893 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:15:57 crc kubenswrapper[4687]: I1203 18:15:57.412250 4687 
scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:15:57 crc kubenswrapper[4687]: E1203 18:15:57.413051 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:16:09 crc kubenswrapper[4687]: I1203 18:16:09.407940 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:16:09 crc kubenswrapper[4687]: E1203 18:16:09.408781 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:16:20 crc kubenswrapper[4687]: I1203 18:16:20.284020 4687 generic.go:334] "Generic (PLEG): container finished" podID="7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" containerID="8c3c3f9bf444570dcaee89df0e90a1fed8b8d3b5b00248dea1a6261b9af1540b" exitCode=0 Dec 03 18:16:20 crc kubenswrapper[4687]: I1203 18:16:20.284712 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" event={"ID":"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502","Type":"ContainerDied","Data":"8c3c3f9bf444570dcaee89df0e90a1fed8b8d3b5b00248dea1a6261b9af1540b"} Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.408215 4687 scope.go:117] "RemoveContainer" 
containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:16:21 crc kubenswrapper[4687]: E1203 18:16:21.408851 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.792031 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.940214 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.940309 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-inventory\") pod \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.940386 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-nova-metadata-neutron-config-0\") pod \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.940491 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4bns\" (UniqueName: \"kubernetes.io/projected/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-kube-api-access-m4bns\") pod \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.940577 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-ssh-key\") pod \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.940603 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-metadata-combined-ca-bundle\") pod \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\" (UID: \"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502\") " Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.947557 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-kube-api-access-m4bns" (OuterVolumeSpecName: "kube-api-access-m4bns") pod "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" (UID: "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502"). InnerVolumeSpecName "kube-api-access-m4bns". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.950231 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" (UID: "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.974538 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" (UID: "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:16:21 crc kubenswrapper[4687]: I1203 18:16:21.989228 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" (UID: "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.000770 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-inventory" (OuterVolumeSpecName: "inventory") pod "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" (UID: "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.007416 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" (UID: "7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.043005 4687 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.043057 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.043077 4687 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.043096 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4bns\" (UniqueName: \"kubernetes.io/projected/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-kube-api-access-m4bns\") on node \"crc\" DevicePath \"\"" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.043115 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.043154 4687 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.307603 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" 
event={"ID":"7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502","Type":"ContainerDied","Data":"86045d17a1edd19daeab06418565835d84ab2ed2e2a3e743aea646391e5686b7"} Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.307643 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86045d17a1edd19daeab06418565835d84ab2ed2e2a3e743aea646391e5686b7" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.307717 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.431552 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp"] Dec 03 18:16:22 crc kubenswrapper[4687]: E1203 18:16:22.432529 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.432582 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.433113 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.434929 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.436779 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.437659 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.437877 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.437982 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.438146 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.453583 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp"] Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.457566 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.457613 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v7ss\" (UniqueName: \"kubernetes.io/projected/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-kube-api-access-9v7ss\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: 
\"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.457657 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.457711 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.457760 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.558964 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.559323 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9v7ss\" (UniqueName: \"kubernetes.io/projected/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-kube-api-access-9v7ss\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.559361 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.559418 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.559480 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.565149 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.567003 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.574785 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.586144 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.592827 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v7ss\" (UniqueName: \"kubernetes.io/projected/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-kube-api-access-9v7ss\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:22 crc kubenswrapper[4687]: I1203 18:16:22.792295 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:16:23 crc kubenswrapper[4687]: I1203 18:16:23.305840 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp"] Dec 03 18:16:23 crc kubenswrapper[4687]: I1203 18:16:23.317427 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" event={"ID":"e3ca0b80-1626-411c-b15c-c66f1f18cf9e","Type":"ContainerStarted","Data":"a12816ed8585f373262fd512ae37598f393c9fb3cd25ebac9e3962402fb65a08"} Dec 03 18:16:24 crc kubenswrapper[4687]: I1203 18:16:24.330450 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" event={"ID":"e3ca0b80-1626-411c-b15c-c66f1f18cf9e","Type":"ContainerStarted","Data":"fc24de29867a17fb0464208f68d37d8a30cc9c1b419397b660f3ee73824ba8bd"} Dec 03 18:16:24 crc kubenswrapper[4687]: I1203 18:16:24.354769 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" podStartSLOduration=1.952350737 podStartE2EDuration="2.354753868s" podCreationTimestamp="2025-12-03 18:16:22 +0000 UTC" firstStartedPulling="2025-12-03 18:16:23.306980598 +0000 UTC m=+2216.197676031" lastFinishedPulling="2025-12-03 18:16:23.709383719 +0000 UTC m=+2216.600079162" observedRunningTime="2025-12-03 18:16:24.351285344 +0000 UTC m=+2217.241980777" watchObservedRunningTime="2025-12-03 18:16:24.354753868 +0000 UTC m=+2217.245449301" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.016695 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fvh28"] Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.019492 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.029505 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fvh28"] Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.159916 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-utilities\") pod \"certified-operators-fvh28\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.160048 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6x9\" (UniqueName: \"kubernetes.io/projected/f2021c32-a6a8-4dd6-a3ba-3ca784941348-kube-api-access-ph6x9\") pod \"certified-operators-fvh28\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.160135 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-catalog-content\") pod \"certified-operators-fvh28\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.262054 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph6x9\" (UniqueName: \"kubernetes.io/projected/f2021c32-a6a8-4dd6-a3ba-3ca784941348-kube-api-access-ph6x9\") pod \"certified-operators-fvh28\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.262463 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-catalog-content\") pod \"certified-operators-fvh28\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.262498 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-utilities\") pod \"certified-operators-fvh28\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.263003 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-utilities\") pod \"certified-operators-fvh28\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.263101 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-catalog-content\") pod \"certified-operators-fvh28\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.291347 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph6x9\" (UniqueName: \"kubernetes.io/projected/f2021c32-a6a8-4dd6-a3ba-3ca784941348-kube-api-access-ph6x9\") pod \"certified-operators-fvh28\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.358903 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:32 crc kubenswrapper[4687]: I1203 18:16:32.926733 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fvh28"] Dec 03 18:16:32 crc kubenswrapper[4687]: W1203 18:16:32.934088 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2021c32_a6a8_4dd6_a3ba_3ca784941348.slice/crio-a592643e13dfb9b9d3e64b5f0ef2c6e6eff157239ce10652137ddea2a0b3146d WatchSource:0}: Error finding container a592643e13dfb9b9d3e64b5f0ef2c6e6eff157239ce10652137ddea2a0b3146d: Status 404 returned error can't find the container with id a592643e13dfb9b9d3e64b5f0ef2c6e6eff157239ce10652137ddea2a0b3146d Dec 03 18:16:33 crc kubenswrapper[4687]: I1203 18:16:33.433629 4687 generic.go:334] "Generic (PLEG): container finished" podID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerID="e2f940015f3eefaa1649afac73cbf8580be94e5be250d42e7a6c17afc5283114" exitCode=0 Dec 03 18:16:33 crc kubenswrapper[4687]: I1203 18:16:33.433882 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvh28" event={"ID":"f2021c32-a6a8-4dd6-a3ba-3ca784941348","Type":"ContainerDied","Data":"e2f940015f3eefaa1649afac73cbf8580be94e5be250d42e7a6c17afc5283114"} Dec 03 18:16:33 crc kubenswrapper[4687]: I1203 18:16:33.434063 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvh28" event={"ID":"f2021c32-a6a8-4dd6-a3ba-3ca784941348","Type":"ContainerStarted","Data":"a592643e13dfb9b9d3e64b5f0ef2c6e6eff157239ce10652137ddea2a0b3146d"} Dec 03 18:16:34 crc kubenswrapper[4687]: I1203 18:16:34.407334 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:16:34 crc kubenswrapper[4687]: E1203 18:16:34.408067 4687 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:16:34 crc kubenswrapper[4687]: I1203 18:16:34.458668 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvh28" event={"ID":"f2021c32-a6a8-4dd6-a3ba-3ca784941348","Type":"ContainerStarted","Data":"19a537c121f934eace413e230387de70209163bebb8f19e4083188a970d9b171"} Dec 03 18:16:35 crc kubenswrapper[4687]: I1203 18:16:35.473391 4687 generic.go:334] "Generic (PLEG): container finished" podID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerID="19a537c121f934eace413e230387de70209163bebb8f19e4083188a970d9b171" exitCode=0 Dec 03 18:16:35 crc kubenswrapper[4687]: I1203 18:16:35.473487 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvh28" event={"ID":"f2021c32-a6a8-4dd6-a3ba-3ca784941348","Type":"ContainerDied","Data":"19a537c121f934eace413e230387de70209163bebb8f19e4083188a970d9b171"} Dec 03 18:16:36 crc kubenswrapper[4687]: I1203 18:16:36.493597 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvh28" event={"ID":"f2021c32-a6a8-4dd6-a3ba-3ca784941348","Type":"ContainerStarted","Data":"cb23068b07657ee1484437a645d82582964b40658e5d6e82859aed9e48dab0bc"} Dec 03 18:16:36 crc kubenswrapper[4687]: I1203 18:16:36.522018 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fvh28" podStartSLOduration=3.086743699 podStartE2EDuration="5.522002529s" podCreationTimestamp="2025-12-03 18:16:31 +0000 UTC" firstStartedPulling="2025-12-03 18:16:33.438214375 +0000 UTC m=+2226.328909818" 
lastFinishedPulling="2025-12-03 18:16:35.873473215 +0000 UTC m=+2228.764168648" observedRunningTime="2025-12-03 18:16:36.518617097 +0000 UTC m=+2229.409312530" watchObservedRunningTime="2025-12-03 18:16:36.522002529 +0000 UTC m=+2229.412697962" Dec 03 18:16:42 crc kubenswrapper[4687]: I1203 18:16:42.359795 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:42 crc kubenswrapper[4687]: I1203 18:16:42.360416 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:42 crc kubenswrapper[4687]: I1203 18:16:42.424547 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:42 crc kubenswrapper[4687]: I1203 18:16:42.606677 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:45 crc kubenswrapper[4687]: I1203 18:16:45.985267 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fvh28"] Dec 03 18:16:45 crc kubenswrapper[4687]: I1203 18:16:45.986001 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fvh28" podUID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerName="registry-server" containerID="cri-o://cb23068b07657ee1484437a645d82582964b40658e5d6e82859aed9e48dab0bc" gracePeriod=2 Dec 03 18:16:46 crc kubenswrapper[4687]: I1203 18:16:46.611277 4687 generic.go:334] "Generic (PLEG): container finished" podID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerID="cb23068b07657ee1484437a645d82582964b40658e5d6e82859aed9e48dab0bc" exitCode=0 Dec 03 18:16:46 crc kubenswrapper[4687]: I1203 18:16:46.611321 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvh28" 
event={"ID":"f2021c32-a6a8-4dd6-a3ba-3ca784941348","Type":"ContainerDied","Data":"cb23068b07657ee1484437a645d82582964b40658e5d6e82859aed9e48dab0bc"} Dec 03 18:16:46 crc kubenswrapper[4687]: I1203 18:16:46.993744 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.081984 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-utilities\") pod \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.082110 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-catalog-content\") pod \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.082215 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph6x9\" (UniqueName: \"kubernetes.io/projected/f2021c32-a6a8-4dd6-a3ba-3ca784941348-kube-api-access-ph6x9\") pod \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\" (UID: \"f2021c32-a6a8-4dd6-a3ba-3ca784941348\") " Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.086187 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-utilities" (OuterVolumeSpecName: "utilities") pod "f2021c32-a6a8-4dd6-a3ba-3ca784941348" (UID: "f2021c32-a6a8-4dd6-a3ba-3ca784941348"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.088990 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2021c32-a6a8-4dd6-a3ba-3ca784941348-kube-api-access-ph6x9" (OuterVolumeSpecName: "kube-api-access-ph6x9") pod "f2021c32-a6a8-4dd6-a3ba-3ca784941348" (UID: "f2021c32-a6a8-4dd6-a3ba-3ca784941348"). InnerVolumeSpecName "kube-api-access-ph6x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.134617 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2021c32-a6a8-4dd6-a3ba-3ca784941348" (UID: "f2021c32-a6a8-4dd6-a3ba-3ca784941348"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.183955 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.184263 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2021c32-a6a8-4dd6-a3ba-3ca784941348-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.184331 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph6x9\" (UniqueName: \"kubernetes.io/projected/f2021c32-a6a8-4dd6-a3ba-3ca784941348-kube-api-access-ph6x9\") on node \"crc\" DevicePath \"\"" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.627147 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fvh28" 
event={"ID":"f2021c32-a6a8-4dd6-a3ba-3ca784941348","Type":"ContainerDied","Data":"a592643e13dfb9b9d3e64b5f0ef2c6e6eff157239ce10652137ddea2a0b3146d"} Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.627481 4687 scope.go:117] "RemoveContainer" containerID="cb23068b07657ee1484437a645d82582964b40658e5d6e82859aed9e48dab0bc" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.627664 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fvh28" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.657925 4687 scope.go:117] "RemoveContainer" containerID="19a537c121f934eace413e230387de70209163bebb8f19e4083188a970d9b171" Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.662818 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fvh28"] Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.680429 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fvh28"] Dec 03 18:16:47 crc kubenswrapper[4687]: I1203 18:16:47.698167 4687 scope.go:117] "RemoveContainer" containerID="e2f940015f3eefaa1649afac73cbf8580be94e5be250d42e7a6c17afc5283114" Dec 03 18:16:48 crc kubenswrapper[4687]: I1203 18:16:48.407692 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:16:48 crc kubenswrapper[4687]: E1203 18:16:48.408170 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:16:49 crc kubenswrapper[4687]: I1203 18:16:49.425701 4687 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" path="/var/lib/kubelet/pods/f2021c32-a6a8-4dd6-a3ba-3ca784941348/volumes" Dec 03 18:17:02 crc kubenswrapper[4687]: I1203 18:17:02.408961 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:17:02 crc kubenswrapper[4687]: E1203 18:17:02.410437 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:17:17 crc kubenswrapper[4687]: I1203 18:17:17.420989 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:17:17 crc kubenswrapper[4687]: E1203 18:17:17.422349 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:17:32 crc kubenswrapper[4687]: I1203 18:17:32.408324 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:17:32 crc kubenswrapper[4687]: E1203 18:17:32.409914 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:17:46 crc kubenswrapper[4687]: I1203 18:17:46.406791 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:17:46 crc kubenswrapper[4687]: E1203 18:17:46.407621 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:18:01 crc kubenswrapper[4687]: I1203 18:18:01.407875 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:18:01 crc kubenswrapper[4687]: E1203 18:18:01.408739 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.310571 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zdbcx"] Dec 03 18:18:10 crc kubenswrapper[4687]: E1203 18:18:10.311649 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerName="extract-utilities" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 
18:18:10.311824 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerName="extract-utilities" Dec 03 18:18:10 crc kubenswrapper[4687]: E1203 18:18:10.311845 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerName="extract-content" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.311853 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerName="extract-content" Dec 03 18:18:10 crc kubenswrapper[4687]: E1203 18:18:10.311880 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerName="registry-server" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.311891 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerName="registry-server" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.312424 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2021c32-a6a8-4dd6-a3ba-3ca784941348" containerName="registry-server" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.314182 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.319596 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdbcx"] Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.487285 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqkwb\" (UniqueName: \"kubernetes.io/projected/1165b15f-8cbe-4d5d-950d-8e47600f3497-kube-api-access-rqkwb\") pod \"redhat-marketplace-zdbcx\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.487433 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-catalog-content\") pod \"redhat-marketplace-zdbcx\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.487543 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-utilities\") pod \"redhat-marketplace-zdbcx\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.589612 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-utilities\") pod \"redhat-marketplace-zdbcx\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.589759 4687 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rqkwb\" (UniqueName: \"kubernetes.io/projected/1165b15f-8cbe-4d5d-950d-8e47600f3497-kube-api-access-rqkwb\") pod \"redhat-marketplace-zdbcx\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.589888 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-catalog-content\") pod \"redhat-marketplace-zdbcx\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.590329 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-utilities\") pod \"redhat-marketplace-zdbcx\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.590329 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-catalog-content\") pod \"redhat-marketplace-zdbcx\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.623692 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqkwb\" (UniqueName: \"kubernetes.io/projected/1165b15f-8cbe-4d5d-950d-8e47600f3497-kube-api-access-rqkwb\") pod \"redhat-marketplace-zdbcx\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:10 crc kubenswrapper[4687]: I1203 18:18:10.638322 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:11 crc kubenswrapper[4687]: I1203 18:18:11.113949 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdbcx"] Dec 03 18:18:11 crc kubenswrapper[4687]: W1203 18:18:11.125015 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1165b15f_8cbe_4d5d_950d_8e47600f3497.slice/crio-082e69d868c1e39c42a5fe5f0f09819ddf26a39f74a0b04b8ae7a784a9bd80c2 WatchSource:0}: Error finding container 082e69d868c1e39c42a5fe5f0f09819ddf26a39f74a0b04b8ae7a784a9bd80c2: Status 404 returned error can't find the container with id 082e69d868c1e39c42a5fe5f0f09819ddf26a39f74a0b04b8ae7a784a9bd80c2 Dec 03 18:18:11 crc kubenswrapper[4687]: I1203 18:18:11.447689 4687 generic.go:334] "Generic (PLEG): container finished" podID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerID="aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26" exitCode=0 Dec 03 18:18:11 crc kubenswrapper[4687]: I1203 18:18:11.447750 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdbcx" event={"ID":"1165b15f-8cbe-4d5d-950d-8e47600f3497","Type":"ContainerDied","Data":"aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26"} Dec 03 18:18:11 crc kubenswrapper[4687]: I1203 18:18:11.447959 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdbcx" event={"ID":"1165b15f-8cbe-4d5d-950d-8e47600f3497","Type":"ContainerStarted","Data":"082e69d868c1e39c42a5fe5f0f09819ddf26a39f74a0b04b8ae7a784a9bd80c2"} Dec 03 18:18:12 crc kubenswrapper[4687]: I1203 18:18:12.458511 4687 generic.go:334] "Generic (PLEG): container finished" podID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerID="ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb" exitCode=0 Dec 03 18:18:12 crc kubenswrapper[4687]: I1203 
18:18:12.458621 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdbcx" event={"ID":"1165b15f-8cbe-4d5d-950d-8e47600f3497","Type":"ContainerDied","Data":"ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb"} Dec 03 18:18:13 crc kubenswrapper[4687]: I1203 18:18:13.470755 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdbcx" event={"ID":"1165b15f-8cbe-4d5d-950d-8e47600f3497","Type":"ContainerStarted","Data":"7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a"} Dec 03 18:18:13 crc kubenswrapper[4687]: I1203 18:18:13.494348 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zdbcx" podStartSLOduration=2.084950704 podStartE2EDuration="3.494324512s" podCreationTimestamp="2025-12-03 18:18:10 +0000 UTC" firstStartedPulling="2025-12-03 18:18:11.449739412 +0000 UTC m=+2324.340434835" lastFinishedPulling="2025-12-03 18:18:12.85911318 +0000 UTC m=+2325.749808643" observedRunningTime="2025-12-03 18:18:13.490409526 +0000 UTC m=+2326.381104959" watchObservedRunningTime="2025-12-03 18:18:13.494324512 +0000 UTC m=+2326.385019945" Dec 03 18:18:14 crc kubenswrapper[4687]: I1203 18:18:14.407725 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:18:14 crc kubenswrapper[4687]: E1203 18:18:14.408406 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:18:20 crc kubenswrapper[4687]: I1203 18:18:20.639287 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:20 crc kubenswrapper[4687]: I1203 18:18:20.640459 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:20 crc kubenswrapper[4687]: I1203 18:18:20.703934 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:21 crc kubenswrapper[4687]: I1203 18:18:21.642304 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:21 crc kubenswrapper[4687]: I1203 18:18:21.710424 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdbcx"] Dec 03 18:18:23 crc kubenswrapper[4687]: I1203 18:18:23.586289 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zdbcx" podUID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerName="registry-server" containerID="cri-o://7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a" gracePeriod=2 Dec 03 18:18:23 crc kubenswrapper[4687]: I1203 18:18:23.983636 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.099481 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-catalog-content\") pod \"1165b15f-8cbe-4d5d-950d-8e47600f3497\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.099680 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqkwb\" (UniqueName: \"kubernetes.io/projected/1165b15f-8cbe-4d5d-950d-8e47600f3497-kube-api-access-rqkwb\") pod \"1165b15f-8cbe-4d5d-950d-8e47600f3497\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.099785 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-utilities\") pod \"1165b15f-8cbe-4d5d-950d-8e47600f3497\" (UID: \"1165b15f-8cbe-4d5d-950d-8e47600f3497\") " Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.101133 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-utilities" (OuterVolumeSpecName: "utilities") pod "1165b15f-8cbe-4d5d-950d-8e47600f3497" (UID: "1165b15f-8cbe-4d5d-950d-8e47600f3497"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.106498 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1165b15f-8cbe-4d5d-950d-8e47600f3497-kube-api-access-rqkwb" (OuterVolumeSpecName: "kube-api-access-rqkwb") pod "1165b15f-8cbe-4d5d-950d-8e47600f3497" (UID: "1165b15f-8cbe-4d5d-950d-8e47600f3497"). InnerVolumeSpecName "kube-api-access-rqkwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.122904 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1165b15f-8cbe-4d5d-950d-8e47600f3497" (UID: "1165b15f-8cbe-4d5d-950d-8e47600f3497"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.202805 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.202837 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqkwb\" (UniqueName: \"kubernetes.io/projected/1165b15f-8cbe-4d5d-950d-8e47600f3497-kube-api-access-rqkwb\") on node \"crc\" DevicePath \"\"" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.202848 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1165b15f-8cbe-4d5d-950d-8e47600f3497-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.599199 4687 generic.go:334] "Generic (PLEG): container finished" podID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerID="7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a" exitCode=0 Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.599287 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zdbcx" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.600535 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdbcx" event={"ID":"1165b15f-8cbe-4d5d-950d-8e47600f3497","Type":"ContainerDied","Data":"7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a"} Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.600705 4687 scope.go:117] "RemoveContainer" containerID="7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.600934 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zdbcx" event={"ID":"1165b15f-8cbe-4d5d-950d-8e47600f3497","Type":"ContainerDied","Data":"082e69d868c1e39c42a5fe5f0f09819ddf26a39f74a0b04b8ae7a784a9bd80c2"} Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.650294 4687 scope.go:117] "RemoveContainer" containerID="ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.654071 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdbcx"] Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.674593 4687 scope.go:117] "RemoveContainer" containerID="aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.675636 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zdbcx"] Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.730369 4687 scope.go:117] "RemoveContainer" containerID="7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a" Dec 03 18:18:24 crc kubenswrapper[4687]: E1203 18:18:24.731989 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a\": container with ID starting with 7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a not found: ID does not exist" containerID="7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.732091 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a"} err="failed to get container status \"7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a\": rpc error: code = NotFound desc = could not find container \"7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a\": container with ID starting with 7e4d89def6d017a5c403b8b0a63f907ba04b707a0705cc30d2d7246645d8a65a not found: ID does not exist" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.732242 4687 scope.go:117] "RemoveContainer" containerID="ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb" Dec 03 18:18:24 crc kubenswrapper[4687]: E1203 18:18:24.733361 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb\": container with ID starting with ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb not found: ID does not exist" containerID="ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.733417 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb"} err="failed to get container status \"ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb\": rpc error: code = NotFound desc = could not find container \"ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb\": container with ID 
starting with ba82ae2c736164ecd213f850767a7dfbf0cd225941b3d801c1cff924c58bbdbb not found: ID does not exist" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.733453 4687 scope.go:117] "RemoveContainer" containerID="aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26" Dec 03 18:18:24 crc kubenswrapper[4687]: E1203 18:18:24.734406 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26\": container with ID starting with aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26 not found: ID does not exist" containerID="aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26" Dec 03 18:18:24 crc kubenswrapper[4687]: I1203 18:18:24.734455 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26"} err="failed to get container status \"aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26\": rpc error: code = NotFound desc = could not find container \"aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26\": container with ID starting with aa8f957dbc78b072d62043cab14167227e54406bd313df0ae6f8818cd505fd26 not found: ID does not exist" Dec 03 18:18:25 crc kubenswrapper[4687]: I1203 18:18:25.408258 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:18:25 crc kubenswrapper[4687]: E1203 18:18:25.408834 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" 
podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:18:25 crc kubenswrapper[4687]: I1203 18:18:25.424414 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1165b15f-8cbe-4d5d-950d-8e47600f3497" path="/var/lib/kubelet/pods/1165b15f-8cbe-4d5d-950d-8e47600f3497/volumes" Dec 03 18:18:40 crc kubenswrapper[4687]: I1203 18:18:40.407513 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:18:40 crc kubenswrapper[4687]: E1203 18:18:40.408395 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:18:53 crc kubenswrapper[4687]: I1203 18:18:53.434417 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:18:53 crc kubenswrapper[4687]: E1203 18:18:53.435461 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:19:07 crc kubenswrapper[4687]: I1203 18:19:07.415634 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:19:07 crc kubenswrapper[4687]: E1203 18:19:07.416618 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:19:19 crc kubenswrapper[4687]: I1203 18:19:19.407261 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:19:19 crc kubenswrapper[4687]: E1203 18:19:19.408000 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:19:33 crc kubenswrapper[4687]: I1203 18:19:33.407728 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:19:33 crc kubenswrapper[4687]: E1203 18:19:33.408602 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:19:48 crc kubenswrapper[4687]: I1203 18:19:48.407701 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:19:48 crc kubenswrapper[4687]: E1203 18:19:48.408441 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:20:03 crc kubenswrapper[4687]: I1203 18:20:03.407415 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:20:03 crc kubenswrapper[4687]: E1203 18:20:03.408192 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:20:18 crc kubenswrapper[4687]: I1203 18:20:18.408821 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:20:18 crc kubenswrapper[4687]: E1203 18:20:18.410063 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:20:30 crc kubenswrapper[4687]: I1203 18:20:30.407786 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:20:30 crc kubenswrapper[4687]: E1203 18:20:30.408600 4687 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:20:45 crc kubenswrapper[4687]: I1203 18:20:45.407585 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:20:46 crc kubenswrapper[4687]: I1203 18:20:46.041333 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"b6692ff1d212118094927c791a4fa3f87932bae6aa68ba875d0eb8e42df513b4"} Dec 03 18:21:05 crc kubenswrapper[4687]: I1203 18:21:05.273675 4687 generic.go:334] "Generic (PLEG): container finished" podID="e3ca0b80-1626-411c-b15c-c66f1f18cf9e" containerID="fc24de29867a17fb0464208f68d37d8a30cc9c1b419397b660f3ee73824ba8bd" exitCode=0 Dec 03 18:21:05 crc kubenswrapper[4687]: I1203 18:21:05.273764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" event={"ID":"e3ca0b80-1626-411c-b15c-c66f1f18cf9e","Type":"ContainerDied","Data":"fc24de29867a17fb0464208f68d37d8a30cc9c1b419397b660f3ee73824ba8bd"} Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.799500 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.934042 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-secret-0\") pod \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.934258 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-inventory\") pod \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.934488 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v7ss\" (UniqueName: \"kubernetes.io/projected/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-kube-api-access-9v7ss\") pod \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.934556 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-combined-ca-bundle\") pod \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.934660 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-ssh-key\") pod \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\" (UID: \"e3ca0b80-1626-411c-b15c-c66f1f18cf9e\") " Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.941836 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e3ca0b80-1626-411c-b15c-c66f1f18cf9e" (UID: "e3ca0b80-1626-411c-b15c-c66f1f18cf9e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.942759 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-kube-api-access-9v7ss" (OuterVolumeSpecName: "kube-api-access-9v7ss") pod "e3ca0b80-1626-411c-b15c-c66f1f18cf9e" (UID: "e3ca0b80-1626-411c-b15c-c66f1f18cf9e"). InnerVolumeSpecName "kube-api-access-9v7ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.976223 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e3ca0b80-1626-411c-b15c-c66f1f18cf9e" (UID: "e3ca0b80-1626-411c-b15c-c66f1f18cf9e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.977851 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-inventory" (OuterVolumeSpecName: "inventory") pod "e3ca0b80-1626-411c-b15c-c66f1f18cf9e" (UID: "e3ca0b80-1626-411c-b15c-c66f1f18cf9e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:21:06 crc kubenswrapper[4687]: I1203 18:21:06.980627 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3ca0b80-1626-411c-b15c-c66f1f18cf9e" (UID: "e3ca0b80-1626-411c-b15c-c66f1f18cf9e"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.037416 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.037502 4687 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.037522 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.037539 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v7ss\" (UniqueName: \"kubernetes.io/projected/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-kube-api-access-9v7ss\") on node \"crc\" DevicePath \"\"" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.037556 4687 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3ca0b80-1626-411c-b15c-c66f1f18cf9e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.300100 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" event={"ID":"e3ca0b80-1626-411c-b15c-c66f1f18cf9e","Type":"ContainerDied","Data":"a12816ed8585f373262fd512ae37598f393c9fb3cd25ebac9e3962402fb65a08"} Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.300174 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12816ed8585f373262fd512ae37598f393c9fb3cd25ebac9e3962402fb65a08" Dec 03 18:21:07 
crc kubenswrapper[4687]: I1203 18:21:07.300204 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.443800 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg"] Dec 03 18:21:07 crc kubenswrapper[4687]: E1203 18:21:07.444360 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerName="extract-content" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.444379 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerName="extract-content" Dec 03 18:21:07 crc kubenswrapper[4687]: E1203 18:21:07.444404 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ca0b80-1626-411c-b15c-c66f1f18cf9e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.444416 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ca0b80-1626-411c-b15c-c66f1f18cf9e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 18:21:07 crc kubenswrapper[4687]: E1203 18:21:07.444428 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerName="registry-server" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.444436 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerName="registry-server" Dec 03 18:21:07 crc kubenswrapper[4687]: E1203 18:21:07.444455 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerName="extract-utilities" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.444464 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="1165b15f-8cbe-4d5d-950d-8e47600f3497" 
containerName="extract-utilities" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.444762 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="1165b15f-8cbe-4d5d-950d-8e47600f3497" containerName="registry-server" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.444786 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ca0b80-1626-411c-b15c-c66f1f18cf9e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.445627 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.448838 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.449926 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.450057 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.450205 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.450249 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.450391 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.450409 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.456933 4687 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg"] Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.563788 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.564740 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.564828 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.564952 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.564988 4687 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.565097 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.565159 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhhh\" (UniqueName: \"kubernetes.io/projected/90387c4a-7957-4b6a-983a-0608fe7a0977-kube-api-access-klhhh\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.565307 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.565387 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-1\") 
pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.667439 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.667527 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.667613 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.667654 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klhhh\" (UniqueName: \"kubernetes.io/projected/90387c4a-7957-4b6a-983a-0608fe7a0977-kube-api-access-klhhh\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.667761 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.667812 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.667876 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.667926 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.667980 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: 
\"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.669371 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.672366 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.673244 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.673915 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.678166 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.678457 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.679094 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.679499 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 18:21:07.689382 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klhhh\" (UniqueName: \"kubernetes.io/projected/90387c4a-7957-4b6a-983a-0608fe7a0977-kube-api-access-klhhh\") pod \"nova-edpm-deployment-openstack-edpm-ipam-l9stg\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:07 crc kubenswrapper[4687]: I1203 
18:21:07.766309 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:21:08 crc kubenswrapper[4687]: I1203 18:21:08.275293 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg"] Dec 03 18:21:08 crc kubenswrapper[4687]: W1203 18:21:08.280293 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90387c4a_7957_4b6a_983a_0608fe7a0977.slice/crio-a3dc59ccea24cbab501acf44cc6d73470d94e312ef088f34f1aa0a4af21528b7 WatchSource:0}: Error finding container a3dc59ccea24cbab501acf44cc6d73470d94e312ef088f34f1aa0a4af21528b7: Status 404 returned error can't find the container with id a3dc59ccea24cbab501acf44cc6d73470d94e312ef088f34f1aa0a4af21528b7 Dec 03 18:21:08 crc kubenswrapper[4687]: I1203 18:21:08.282025 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:21:08 crc kubenswrapper[4687]: I1203 18:21:08.309051 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" event={"ID":"90387c4a-7957-4b6a-983a-0608fe7a0977","Type":"ContainerStarted","Data":"a3dc59ccea24cbab501acf44cc6d73470d94e312ef088f34f1aa0a4af21528b7"} Dec 03 18:21:09 crc kubenswrapper[4687]: I1203 18:21:09.329932 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" event={"ID":"90387c4a-7957-4b6a-983a-0608fe7a0977","Type":"ContainerStarted","Data":"ee256b3cab89ee9e15ee4cfe7ac381807a5591d68cc56d663ced7a83d8844652"} Dec 03 18:21:09 crc kubenswrapper[4687]: I1203 18:21:09.355431 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" podStartSLOduration=1.893698469 podStartE2EDuration="2.35540888s" 
podCreationTimestamp="2025-12-03 18:21:07 +0000 UTC" firstStartedPulling="2025-12-03 18:21:08.281826568 +0000 UTC m=+2501.172522001" lastFinishedPulling="2025-12-03 18:21:08.743536989 +0000 UTC m=+2501.634232412" observedRunningTime="2025-12-03 18:21:09.351470644 +0000 UTC m=+2502.242166087" watchObservedRunningTime="2025-12-03 18:21:09.35540888 +0000 UTC m=+2502.246104323" Dec 03 18:23:14 crc kubenswrapper[4687]: I1203 18:23:14.112072 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:23:14 crc kubenswrapper[4687]: I1203 18:23:14.112768 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:23:40 crc kubenswrapper[4687]: I1203 18:23:40.884904 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v9c6f"] Dec 03 18:23:40 crc kubenswrapper[4687]: I1203 18:23:40.888609 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:40 crc kubenswrapper[4687]: I1203 18:23:40.905084 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9c6f"] Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.070526 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-catalog-content\") pod \"redhat-operators-v9c6f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.070837 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726fm\" (UniqueName: \"kubernetes.io/projected/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-kube-api-access-726fm\") pod \"redhat-operators-v9c6f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.070993 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-utilities\") pod \"redhat-operators-v9c6f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.175904 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726fm\" (UniqueName: \"kubernetes.io/projected/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-kube-api-access-726fm\") pod \"redhat-operators-v9c6f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.175995 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-utilities\") pod \"redhat-operators-v9c6f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.176154 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-catalog-content\") pod \"redhat-operators-v9c6f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.176623 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-utilities\") pod \"redhat-operators-v9c6f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.176763 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-catalog-content\") pod \"redhat-operators-v9c6f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.200615 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726fm\" (UniqueName: \"kubernetes.io/projected/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-kube-api-access-726fm\") pod \"redhat-operators-v9c6f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.222015 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:41 crc kubenswrapper[4687]: I1203 18:23:41.707067 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v9c6f"] Dec 03 18:23:42 crc kubenswrapper[4687]: I1203 18:23:42.177856 4687 generic.go:334] "Generic (PLEG): container finished" podID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerID="4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716" exitCode=0 Dec 03 18:23:42 crc kubenswrapper[4687]: I1203 18:23:42.177905 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c6f" event={"ID":"2c4b195f-99ac-452d-ab7e-e31bbf5a383f","Type":"ContainerDied","Data":"4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716"} Dec 03 18:23:42 crc kubenswrapper[4687]: I1203 18:23:42.178228 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c6f" event={"ID":"2c4b195f-99ac-452d-ab7e-e31bbf5a383f","Type":"ContainerStarted","Data":"8808c2924481996ae53500ceea2b9478157ad01dd9efe165b5a5040114c1d760"} Dec 03 18:23:44 crc kubenswrapper[4687]: I1203 18:23:44.111581 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:23:44 crc kubenswrapper[4687]: I1203 18:23:44.112209 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:23:44 crc kubenswrapper[4687]: I1203 18:23:44.203482 4687 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-v9c6f" event={"ID":"2c4b195f-99ac-452d-ab7e-e31bbf5a383f","Type":"ContainerStarted","Data":"4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3"} Dec 03 18:23:47 crc kubenswrapper[4687]: I1203 18:23:47.264484 4687 generic.go:334] "Generic (PLEG): container finished" podID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerID="4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3" exitCode=0 Dec 03 18:23:47 crc kubenswrapper[4687]: I1203 18:23:47.265089 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c6f" event={"ID":"2c4b195f-99ac-452d-ab7e-e31bbf5a383f","Type":"ContainerDied","Data":"4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3"} Dec 03 18:23:49 crc kubenswrapper[4687]: I1203 18:23:49.288906 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c6f" event={"ID":"2c4b195f-99ac-452d-ab7e-e31bbf5a383f","Type":"ContainerStarted","Data":"2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2"} Dec 03 18:23:49 crc kubenswrapper[4687]: I1203 18:23:49.309438 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v9c6f" podStartSLOduration=3.307111935 podStartE2EDuration="9.309416448s" podCreationTimestamp="2025-12-03 18:23:40 +0000 UTC" firstStartedPulling="2025-12-03 18:23:42.179656974 +0000 UTC m=+2655.070352407" lastFinishedPulling="2025-12-03 18:23:48.181961487 +0000 UTC m=+2661.072656920" observedRunningTime="2025-12-03 18:23:49.306727785 +0000 UTC m=+2662.197423228" watchObservedRunningTime="2025-12-03 18:23:49.309416448 +0000 UTC m=+2662.200111881" Dec 03 18:23:51 crc kubenswrapper[4687]: I1203 18:23:51.223078 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:51 crc kubenswrapper[4687]: I1203 18:23:51.223190 4687 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:23:52 crc kubenswrapper[4687]: I1203 18:23:52.296031 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v9c6f" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerName="registry-server" probeResult="failure" output=< Dec 03 18:23:52 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Dec 03 18:23:52 crc kubenswrapper[4687]: > Dec 03 18:24:01 crc kubenswrapper[4687]: I1203 18:24:01.266833 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:24:01 crc kubenswrapper[4687]: I1203 18:24:01.316667 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:24:01 crc kubenswrapper[4687]: I1203 18:24:01.502476 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9c6f"] Dec 03 18:24:02 crc kubenswrapper[4687]: I1203 18:24:02.416321 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v9c6f" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerName="registry-server" containerID="cri-o://2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2" gracePeriod=2 Dec 03 18:24:02 crc kubenswrapper[4687]: I1203 18:24:02.857952 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.056718 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-catalog-content\") pod \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.057160 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-utilities\") pod \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.058068 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-utilities" (OuterVolumeSpecName: "utilities") pod "2c4b195f-99ac-452d-ab7e-e31bbf5a383f" (UID: "2c4b195f-99ac-452d-ab7e-e31bbf5a383f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.063642 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-726fm\" (UniqueName: \"kubernetes.io/projected/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-kube-api-access-726fm\") pod \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\" (UID: \"2c4b195f-99ac-452d-ab7e-e31bbf5a383f\") " Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.064375 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.070335 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-kube-api-access-726fm" (OuterVolumeSpecName: "kube-api-access-726fm") pod "2c4b195f-99ac-452d-ab7e-e31bbf5a383f" (UID: "2c4b195f-99ac-452d-ab7e-e31bbf5a383f"). InnerVolumeSpecName "kube-api-access-726fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.168048 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-726fm\" (UniqueName: \"kubernetes.io/projected/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-kube-api-access-726fm\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.168658 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c4b195f-99ac-452d-ab7e-e31bbf5a383f" (UID: "2c4b195f-99ac-452d-ab7e-e31bbf5a383f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.269546 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4b195f-99ac-452d-ab7e-e31bbf5a383f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.427029 4687 generic.go:334] "Generic (PLEG): container finished" podID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerID="2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2" exitCode=0 Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.427359 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c6f" event={"ID":"2c4b195f-99ac-452d-ab7e-e31bbf5a383f","Type":"ContainerDied","Data":"2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2"} Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.427405 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v9c6f" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.427439 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v9c6f" event={"ID":"2c4b195f-99ac-452d-ab7e-e31bbf5a383f","Type":"ContainerDied","Data":"8808c2924481996ae53500ceea2b9478157ad01dd9efe165b5a5040114c1d760"} Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.427462 4687 scope.go:117] "RemoveContainer" containerID="2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.458937 4687 scope.go:117] "RemoveContainer" containerID="4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.466329 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v9c6f"] Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.475600 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v9c6f"] Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.490516 4687 scope.go:117] "RemoveContainer" containerID="4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.543970 4687 scope.go:117] "RemoveContainer" containerID="2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2" Dec 03 18:24:03 crc kubenswrapper[4687]: E1203 18:24:03.544397 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2\": container with ID starting with 2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2 not found: ID does not exist" containerID="2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.544435 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2"} err="failed to get container status \"2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2\": rpc error: code = NotFound desc = could not find container \"2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2\": container with ID starting with 2c171e0887efcd0e2d9b31c35ea588d960f4efc500a9b0f13ff74594325a2af2 not found: ID does not exist" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.544459 4687 scope.go:117] "RemoveContainer" containerID="4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3" Dec 03 18:24:03 crc kubenswrapper[4687]: E1203 18:24:03.544732 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3\": container with ID starting with 4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3 not found: ID does not exist" containerID="4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.544802 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3"} err="failed to get container status \"4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3\": rpc error: code = NotFound desc = could not find container \"4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3\": container with ID starting with 4e2c37089bbedc852e06831a6a03c8c092ff0f9764a78abfd2c5334ac9a299e3 not found: ID does not exist" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.544827 4687 scope.go:117] "RemoveContainer" containerID="4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716" Dec 03 18:24:03 crc kubenswrapper[4687]: E1203 
18:24:03.545346 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716\": container with ID starting with 4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716 not found: ID does not exist" containerID="4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716" Dec 03 18:24:03 crc kubenswrapper[4687]: I1203 18:24:03.545366 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716"} err="failed to get container status \"4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716\": rpc error: code = NotFound desc = could not find container \"4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716\": container with ID starting with 4c527fdaa8f96ce2c9eea5ae5c74392451467eeb35e93ab630a5154e419a7716 not found: ID does not exist" Dec 03 18:24:05 crc kubenswrapper[4687]: I1203 18:24:05.425367 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" path="/var/lib/kubelet/pods/2c4b195f-99ac-452d-ab7e-e31bbf5a383f/volumes" Dec 03 18:24:13 crc kubenswrapper[4687]: I1203 18:24:13.517077 4687 generic.go:334] "Generic (PLEG): container finished" podID="90387c4a-7957-4b6a-983a-0608fe7a0977" containerID="ee256b3cab89ee9e15ee4cfe7ac381807a5591d68cc56d663ced7a83d8844652" exitCode=0 Dec 03 18:24:13 crc kubenswrapper[4687]: I1203 18:24:13.517179 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" event={"ID":"90387c4a-7957-4b6a-983a-0608fe7a0977","Type":"ContainerDied","Data":"ee256b3cab89ee9e15ee4cfe7ac381807a5591d68cc56d663ced7a83d8844652"} Dec 03 18:24:14 crc kubenswrapper[4687]: I1203 18:24:14.112198 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:24:14 crc kubenswrapper[4687]: I1203 18:24:14.112387 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:24:14 crc kubenswrapper[4687]: I1203 18:24:14.112520 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:24:14 crc kubenswrapper[4687]: I1203 18:24:14.114170 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b6692ff1d212118094927c791a4fa3f87932bae6aa68ba875d0eb8e42df513b4"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:24:14 crc kubenswrapper[4687]: I1203 18:24:14.114303 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://b6692ff1d212118094927c791a4fa3f87932bae6aa68ba875d0eb8e42df513b4" gracePeriod=600 Dec 03 18:24:14 crc kubenswrapper[4687]: I1203 18:24:14.529779 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="b6692ff1d212118094927c791a4fa3f87932bae6aa68ba875d0eb8e42df513b4" exitCode=0 Dec 03 18:24:14 crc kubenswrapper[4687]: I1203 18:24:14.529842 4687 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"b6692ff1d212118094927c791a4fa3f87932bae6aa68ba875d0eb8e42df513b4"} Dec 03 18:24:14 crc kubenswrapper[4687]: I1203 18:24:14.530335 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91"} Dec 03 18:24:14 crc kubenswrapper[4687]: I1203 18:24:14.530357 4687 scope.go:117] "RemoveContainer" containerID="0eb9283ac71d0469c81074f45904a67ab90f864c97e60fbe068ccde80be6d560" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.100698 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.199813 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-combined-ca-bundle\") pod \"90387c4a-7957-4b6a-983a-0608fe7a0977\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.199913 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-1\") pod \"90387c4a-7957-4b6a-983a-0608fe7a0977\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.199968 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-inventory\") pod 
\"90387c4a-7957-4b6a-983a-0608fe7a0977\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.199999 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-extra-config-0\") pod \"90387c4a-7957-4b6a-983a-0608fe7a0977\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.200171 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-1\") pod \"90387c4a-7957-4b6a-983a-0608fe7a0977\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.200209 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klhhh\" (UniqueName: \"kubernetes.io/projected/90387c4a-7957-4b6a-983a-0608fe7a0977-kube-api-access-klhhh\") pod \"90387c4a-7957-4b6a-983a-0608fe7a0977\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.200253 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-ssh-key\") pod \"90387c4a-7957-4b6a-983a-0608fe7a0977\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.200300 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-0\") pod \"90387c4a-7957-4b6a-983a-0608fe7a0977\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.200322 4687 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-0\") pod \"90387c4a-7957-4b6a-983a-0608fe7a0977\" (UID: \"90387c4a-7957-4b6a-983a-0608fe7a0977\") " Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.205801 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "90387c4a-7957-4b6a-983a-0608fe7a0977" (UID: "90387c4a-7957-4b6a-983a-0608fe7a0977"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.226682 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90387c4a-7957-4b6a-983a-0608fe7a0977-kube-api-access-klhhh" (OuterVolumeSpecName: "kube-api-access-klhhh") pod "90387c4a-7957-4b6a-983a-0608fe7a0977" (UID: "90387c4a-7957-4b6a-983a-0608fe7a0977"). InnerVolumeSpecName "kube-api-access-klhhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.236606 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "90387c4a-7957-4b6a-983a-0608fe7a0977" (UID: "90387c4a-7957-4b6a-983a-0608fe7a0977"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.238413 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-inventory" (OuterVolumeSpecName: "inventory") pod "90387c4a-7957-4b6a-983a-0608fe7a0977" (UID: "90387c4a-7957-4b6a-983a-0608fe7a0977"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.243225 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "90387c4a-7957-4b6a-983a-0608fe7a0977" (UID: "90387c4a-7957-4b6a-983a-0608fe7a0977"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.243599 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "90387c4a-7957-4b6a-983a-0608fe7a0977" (UID: "90387c4a-7957-4b6a-983a-0608fe7a0977"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.247335 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "90387c4a-7957-4b6a-983a-0608fe7a0977" (UID: "90387c4a-7957-4b6a-983a-0608fe7a0977"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.254831 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "90387c4a-7957-4b6a-983a-0608fe7a0977" (UID: "90387c4a-7957-4b6a-983a-0608fe7a0977"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.256103 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "90387c4a-7957-4b6a-983a-0608fe7a0977" (UID: "90387c4a-7957-4b6a-983a-0608fe7a0977"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.306556 4687 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.306811 4687 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.306883 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.306946 4687 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.307004 4687 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.307062 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klhhh\" (UniqueName: \"kubernetes.io/projected/90387c4a-7957-4b6a-983a-0608fe7a0977-kube-api-access-klhhh\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.307151 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.307217 4687 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.307274 4687 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/90387c4a-7957-4b6a-983a-0608fe7a0977-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.546257 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" event={"ID":"90387c4a-7957-4b6a-983a-0608fe7a0977","Type":"ContainerDied","Data":"a3dc59ccea24cbab501acf44cc6d73470d94e312ef088f34f1aa0a4af21528b7"} Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.546312 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-l9stg" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.546333 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3dc59ccea24cbab501acf44cc6d73470d94e312ef088f34f1aa0a4af21528b7" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.654731 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j"] Dec 03 18:24:15 crc kubenswrapper[4687]: E1203 18:24:15.655078 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerName="registry-server" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.655094 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerName="registry-server" Dec 03 18:24:15 crc kubenswrapper[4687]: E1203 18:24:15.657327 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerName="extract-content" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.657350 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerName="extract-content" Dec 03 18:24:15 crc kubenswrapper[4687]: E1203 18:24:15.657384 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerName="extract-utilities" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.657391 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerName="extract-utilities" Dec 03 18:24:15 crc kubenswrapper[4687]: E1203 18:24:15.657399 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90387c4a-7957-4b6a-983a-0608fe7a0977" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.657406 4687 
state_mem.go:107] "Deleted CPUSet assignment" podUID="90387c4a-7957-4b6a-983a-0608fe7a0977" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.657639 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4b195f-99ac-452d-ab7e-e31bbf5a383f" containerName="registry-server" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.657656 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="90387c4a-7957-4b6a-983a-0608fe7a0977" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.658377 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.661079 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.661085 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.661252 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.661545 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.661782 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7tptj" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.664850 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j"] Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.817014 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.817080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.817269 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.817492 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.817546 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.817767 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.817819 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxjkr\" (UniqueName: \"kubernetes.io/projected/0ce84a46-82bc-42a8-b645-d801d2a8edff-kube-api-access-nxjkr\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.919837 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.920189 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: 
\"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.920302 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.920350 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxjkr\" (UniqueName: \"kubernetes.io/projected/0ce84a46-82bc-42a8-b645-d801d2a8edff-kube-api-access-nxjkr\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.920536 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.920580 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.920645 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.926929 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.927043 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.928286 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.928389 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.930071 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.938054 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.941996 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxjkr\" (UniqueName: \"kubernetes.io/projected/0ce84a46-82bc-42a8-b645-d801d2a8edff-kube-api-access-nxjkr\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4c62j\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:15 crc kubenswrapper[4687]: I1203 18:24:15.984011 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:24:16 crc kubenswrapper[4687]: W1203 18:24:16.365517 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ce84a46_82bc_42a8_b645_d801d2a8edff.slice/crio-a8409b7d2e335c4191596e68b0186a9435b8d33c7339237da8505c557700b513 WatchSource:0}: Error finding container a8409b7d2e335c4191596e68b0186a9435b8d33c7339237da8505c557700b513: Status 404 returned error can't find the container with id a8409b7d2e335c4191596e68b0186a9435b8d33c7339237da8505c557700b513 Dec 03 18:24:16 crc kubenswrapper[4687]: I1203 18:24:16.369933 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j"] Dec 03 18:24:16 crc kubenswrapper[4687]: I1203 18:24:16.554647 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" event={"ID":"0ce84a46-82bc-42a8-b645-d801d2a8edff","Type":"ContainerStarted","Data":"a8409b7d2e335c4191596e68b0186a9435b8d33c7339237da8505c557700b513"} Dec 03 18:24:17 crc kubenswrapper[4687]: I1203 18:24:17.571869 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" event={"ID":"0ce84a46-82bc-42a8-b645-d801d2a8edff","Type":"ContainerStarted","Data":"cc69be9153811cb35853f55661c4f574865ecf39c5108d98f2b2562bd08a28b6"} Dec 03 18:24:17 crc kubenswrapper[4687]: I1203 18:24:17.603045 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" podStartSLOduration=2.196973883 podStartE2EDuration="2.603024168s" podCreationTimestamp="2025-12-03 18:24:15 +0000 UTC" firstStartedPulling="2025-12-03 18:24:16.368355096 +0000 UTC m=+2689.259050569" lastFinishedPulling="2025-12-03 18:24:16.774405421 +0000 UTC m=+2689.665100854" 
observedRunningTime="2025-12-03 18:24:17.594277122 +0000 UTC m=+2690.484972585" watchObservedRunningTime="2025-12-03 18:24:17.603024168 +0000 UTC m=+2690.493719601" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.054400 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vjcpb"] Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.058190 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.080800 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vjcpb"] Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.145898 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzvh4\" (UniqueName: \"kubernetes.io/projected/95ba22b3-aa84-4e74-badc-39306d399c51-kube-api-access-fzvh4\") pod \"community-operators-vjcpb\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.146237 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-catalog-content\") pod \"community-operators-vjcpb\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.146538 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-utilities\") pod \"community-operators-vjcpb\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 
18:25:07.248854 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-utilities\") pod \"community-operators-vjcpb\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.249003 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzvh4\" (UniqueName: \"kubernetes.io/projected/95ba22b3-aa84-4e74-badc-39306d399c51-kube-api-access-fzvh4\") pod \"community-operators-vjcpb\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.249094 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-catalog-content\") pod \"community-operators-vjcpb\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.249582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-utilities\") pod \"community-operators-vjcpb\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.249675 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-catalog-content\") pod \"community-operators-vjcpb\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.269646 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzvh4\" (UniqueName: \"kubernetes.io/projected/95ba22b3-aa84-4e74-badc-39306d399c51-kube-api-access-fzvh4\") pod \"community-operators-vjcpb\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.380821 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:07 crc kubenswrapper[4687]: I1203 18:25:07.672755 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vjcpb"] Dec 03 18:25:08 crc kubenswrapper[4687]: E1203 18:25:08.072756 4687 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95ba22b3_aa84_4e74_badc_39306d399c51.slice/crio-772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95ba22b3_aa84_4e74_badc_39306d399c51.slice/crio-conmon-772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b.scope\": RecentStats: unable to find data in memory cache]" Dec 03 18:25:08 crc kubenswrapper[4687]: I1203 18:25:08.116112 4687 generic.go:334] "Generic (PLEG): container finished" podID="95ba22b3-aa84-4e74-badc-39306d399c51" containerID="772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b" exitCode=0 Dec 03 18:25:08 crc kubenswrapper[4687]: I1203 18:25:08.116231 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjcpb" event={"ID":"95ba22b3-aa84-4e74-badc-39306d399c51","Type":"ContainerDied","Data":"772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b"} Dec 03 18:25:08 crc kubenswrapper[4687]: I1203 18:25:08.116276 
4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjcpb" event={"ID":"95ba22b3-aa84-4e74-badc-39306d399c51","Type":"ContainerStarted","Data":"d29ff82084ce6756995d5a314d989c0c6832081d2ad12541311037d28bdc5dea"} Dec 03 18:25:09 crc kubenswrapper[4687]: I1203 18:25:09.125001 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjcpb" event={"ID":"95ba22b3-aa84-4e74-badc-39306d399c51","Type":"ContainerStarted","Data":"81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678"} Dec 03 18:25:10 crc kubenswrapper[4687]: I1203 18:25:10.138674 4687 generic.go:334] "Generic (PLEG): container finished" podID="95ba22b3-aa84-4e74-badc-39306d399c51" containerID="81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678" exitCode=0 Dec 03 18:25:10 crc kubenswrapper[4687]: I1203 18:25:10.138740 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjcpb" event={"ID":"95ba22b3-aa84-4e74-badc-39306d399c51","Type":"ContainerDied","Data":"81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678"} Dec 03 18:25:11 crc kubenswrapper[4687]: I1203 18:25:11.150392 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjcpb" event={"ID":"95ba22b3-aa84-4e74-badc-39306d399c51","Type":"ContainerStarted","Data":"34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44"} Dec 03 18:25:11 crc kubenswrapper[4687]: I1203 18:25:11.176669 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vjcpb" podStartSLOduration=1.722461206 podStartE2EDuration="4.176654754s" podCreationTimestamp="2025-12-03 18:25:07 +0000 UTC" firstStartedPulling="2025-12-03 18:25:08.118006987 +0000 UTC m=+2741.008702410" lastFinishedPulling="2025-12-03 18:25:10.572200515 +0000 UTC m=+2743.462895958" observedRunningTime="2025-12-03 
18:25:11.172919974 +0000 UTC m=+2744.063615417" watchObservedRunningTime="2025-12-03 18:25:11.176654754 +0000 UTC m=+2744.067350187" Dec 03 18:25:17 crc kubenswrapper[4687]: I1203 18:25:17.381641 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:17 crc kubenswrapper[4687]: I1203 18:25:17.382211 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:17 crc kubenswrapper[4687]: I1203 18:25:17.443748 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:18 crc kubenswrapper[4687]: I1203 18:25:18.287255 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:18 crc kubenswrapper[4687]: I1203 18:25:18.353507 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vjcpb"] Dec 03 18:25:20 crc kubenswrapper[4687]: I1203 18:25:20.254394 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vjcpb" podUID="95ba22b3-aa84-4e74-badc-39306d399c51" containerName="registry-server" containerID="cri-o://34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44" gracePeriod=2 Dec 03 18:25:20 crc kubenswrapper[4687]: I1203 18:25:20.779296 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:20 crc kubenswrapper[4687]: I1203 18:25:20.911269 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzvh4\" (UniqueName: \"kubernetes.io/projected/95ba22b3-aa84-4e74-badc-39306d399c51-kube-api-access-fzvh4\") pod \"95ba22b3-aa84-4e74-badc-39306d399c51\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " Dec 03 18:25:20 crc kubenswrapper[4687]: I1203 18:25:20.911352 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-catalog-content\") pod \"95ba22b3-aa84-4e74-badc-39306d399c51\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " Dec 03 18:25:20 crc kubenswrapper[4687]: I1203 18:25:20.911470 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-utilities\") pod \"95ba22b3-aa84-4e74-badc-39306d399c51\" (UID: \"95ba22b3-aa84-4e74-badc-39306d399c51\") " Dec 03 18:25:20 crc kubenswrapper[4687]: I1203 18:25:20.912744 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-utilities" (OuterVolumeSpecName: "utilities") pod "95ba22b3-aa84-4e74-badc-39306d399c51" (UID: "95ba22b3-aa84-4e74-badc-39306d399c51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:25:20 crc kubenswrapper[4687]: I1203 18:25:20.918305 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ba22b3-aa84-4e74-badc-39306d399c51-kube-api-access-fzvh4" (OuterVolumeSpecName: "kube-api-access-fzvh4") pod "95ba22b3-aa84-4e74-badc-39306d399c51" (UID: "95ba22b3-aa84-4e74-badc-39306d399c51"). InnerVolumeSpecName "kube-api-access-fzvh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:25:20 crc kubenswrapper[4687]: I1203 18:25:20.976430 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95ba22b3-aa84-4e74-badc-39306d399c51" (UID: "95ba22b3-aa84-4e74-badc-39306d399c51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.014325 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzvh4\" (UniqueName: \"kubernetes.io/projected/95ba22b3-aa84-4e74-badc-39306d399c51-kube-api-access-fzvh4\") on node \"crc\" DevicePath \"\"" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.014560 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.014616 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ba22b3-aa84-4e74-badc-39306d399c51-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.263678 4687 generic.go:334] "Generic (PLEG): container finished" podID="95ba22b3-aa84-4e74-badc-39306d399c51" containerID="34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44" exitCode=0 Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.263731 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjcpb" event={"ID":"95ba22b3-aa84-4e74-badc-39306d399c51","Type":"ContainerDied","Data":"34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44"} Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.263766 4687 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-vjcpb" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.264622 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vjcpb" event={"ID":"95ba22b3-aa84-4e74-badc-39306d399c51","Type":"ContainerDied","Data":"d29ff82084ce6756995d5a314d989c0c6832081d2ad12541311037d28bdc5dea"} Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.264712 4687 scope.go:117] "RemoveContainer" containerID="34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.293803 4687 scope.go:117] "RemoveContainer" containerID="81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.302803 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vjcpb"] Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.314024 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vjcpb"] Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.330280 4687 scope.go:117] "RemoveContainer" containerID="772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.359078 4687 scope.go:117] "RemoveContainer" containerID="34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44" Dec 03 18:25:21 crc kubenswrapper[4687]: E1203 18:25:21.359650 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44\": container with ID starting with 34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44 not found: ID does not exist" containerID="34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.359685 
4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44"} err="failed to get container status \"34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44\": rpc error: code = NotFound desc = could not find container \"34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44\": container with ID starting with 34fc542c0b324c18bb71038bb038ff71778870a76ab152d3c0b67f11c2a37d44 not found: ID does not exist" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.359705 4687 scope.go:117] "RemoveContainer" containerID="81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678" Dec 03 18:25:21 crc kubenswrapper[4687]: E1203 18:25:21.360069 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678\": container with ID starting with 81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678 not found: ID does not exist" containerID="81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.360090 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678"} err="failed to get container status \"81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678\": rpc error: code = NotFound desc = could not find container \"81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678\": container with ID starting with 81beffa7c53a598915ce5f84d13607a4cf0427e33155213daf33827616153678 not found: ID does not exist" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.360104 4687 scope.go:117] "RemoveContainer" containerID="772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b" Dec 03 18:25:21 crc kubenswrapper[4687]: E1203 
18:25:21.360410 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b\": container with ID starting with 772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b not found: ID does not exist" containerID="772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.360433 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b"} err="failed to get container status \"772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b\": rpc error: code = NotFound desc = could not find container \"772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b\": container with ID starting with 772752e9e32d44e4eb019367a5e99aa1057265be0af76086f4a3aaeebb2f719b not found: ID does not exist" Dec 03 18:25:21 crc kubenswrapper[4687]: I1203 18:25:21.420178 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ba22b3-aa84-4e74-badc-39306d399c51" path="/var/lib/kubelet/pods/95ba22b3-aa84-4e74-badc-39306d399c51/volumes" Dec 03 18:26:14 crc kubenswrapper[4687]: I1203 18:26:14.111371 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:26:14 crc kubenswrapper[4687]: I1203 18:26:14.112049 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 03 18:26:44 crc kubenswrapper[4687]: I1203 18:26:44.112426 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:26:44 crc kubenswrapper[4687]: I1203 18:26:44.113425 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:26:56 crc kubenswrapper[4687]: I1203 18:26:56.232285 4687 generic.go:334] "Generic (PLEG): container finished" podID="0ce84a46-82bc-42a8-b645-d801d2a8edff" containerID="cc69be9153811cb35853f55661c4f574865ecf39c5108d98f2b2562bd08a28b6" exitCode=0 Dec 03 18:26:56 crc kubenswrapper[4687]: I1203 18:26:56.232466 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" event={"ID":"0ce84a46-82bc-42a8-b645-d801d2a8edff","Type":"ContainerDied","Data":"cc69be9153811cb35853f55661c4f574865ecf39c5108d98f2b2562bd08a28b6"} Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.732204 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.860886 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-telemetry-combined-ca-bundle\") pod \"0ce84a46-82bc-42a8-b645-d801d2a8edff\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.860962 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxjkr\" (UniqueName: \"kubernetes.io/projected/0ce84a46-82bc-42a8-b645-d801d2a8edff-kube-api-access-nxjkr\") pod \"0ce84a46-82bc-42a8-b645-d801d2a8edff\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.860993 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ssh-key\") pod \"0ce84a46-82bc-42a8-b645-d801d2a8edff\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.861045 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-2\") pod \"0ce84a46-82bc-42a8-b645-d801d2a8edff\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.861172 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-0\") pod \"0ce84a46-82bc-42a8-b645-d801d2a8edff\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 
18:26:57.861220 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-inventory\") pod \"0ce84a46-82bc-42a8-b645-d801d2a8edff\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.861260 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-1\") pod \"0ce84a46-82bc-42a8-b645-d801d2a8edff\" (UID: \"0ce84a46-82bc-42a8-b645-d801d2a8edff\") " Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.866685 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0ce84a46-82bc-42a8-b645-d801d2a8edff" (UID: "0ce84a46-82bc-42a8-b645-d801d2a8edff"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.870255 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ce84a46-82bc-42a8-b645-d801d2a8edff-kube-api-access-nxjkr" (OuterVolumeSpecName: "kube-api-access-nxjkr") pod "0ce84a46-82bc-42a8-b645-d801d2a8edff" (UID: "0ce84a46-82bc-42a8-b645-d801d2a8edff"). InnerVolumeSpecName "kube-api-access-nxjkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.888895 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-inventory" (OuterVolumeSpecName: "inventory") pod "0ce84a46-82bc-42a8-b645-d801d2a8edff" (UID: "0ce84a46-82bc-42a8-b645-d801d2a8edff"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.889579 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "0ce84a46-82bc-42a8-b645-d801d2a8edff" (UID: "0ce84a46-82bc-42a8-b645-d801d2a8edff"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.891553 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "0ce84a46-82bc-42a8-b645-d801d2a8edff" (UID: "0ce84a46-82bc-42a8-b645-d801d2a8edff"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.898440 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0ce84a46-82bc-42a8-b645-d801d2a8edff" (UID: "0ce84a46-82bc-42a8-b645-d801d2a8edff"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.910974 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "0ce84a46-82bc-42a8-b645-d801d2a8edff" (UID: "0ce84a46-82bc-42a8-b645-d801d2a8edff"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.964461 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.964502 4687 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-inventory\") on node \"crc\" DevicePath \"\"" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.964519 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.964537 4687 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.964555 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxjkr\" (UniqueName: \"kubernetes.io/projected/0ce84a46-82bc-42a8-b645-d801d2a8edff-kube-api-access-nxjkr\") on node \"crc\" DevicePath \"\"" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.964570 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:26:57 crc kubenswrapper[4687]: I1203 18:26:57.964585 4687 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/0ce84a46-82bc-42a8-b645-d801d2a8edff-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 03 18:26:58 crc kubenswrapper[4687]: I1203 18:26:58.256821 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" event={"ID":"0ce84a46-82bc-42a8-b645-d801d2a8edff","Type":"ContainerDied","Data":"a8409b7d2e335c4191596e68b0186a9435b8d33c7339237da8505c557700b513"} Dec 03 18:26:58 crc kubenswrapper[4687]: I1203 18:26:58.256918 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8409b7d2e335c4191596e68b0186a9435b8d33c7339237da8505c557700b513" Dec 03 18:26:58 crc kubenswrapper[4687]: I1203 18:26:58.256917 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4c62j" Dec 03 18:27:00 crc kubenswrapper[4687]: I1203 18:27:00.975694 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wp4s8"] Dec 03 18:27:00 crc kubenswrapper[4687]: E1203 18:27:00.976780 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ce84a46-82bc-42a8-b645-d801d2a8edff" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 18:27:00 crc kubenswrapper[4687]: I1203 18:27:00.976823 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ce84a46-82bc-42a8-b645-d801d2a8edff" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 18:27:00 crc kubenswrapper[4687]: E1203 18:27:00.976856 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ba22b3-aa84-4e74-badc-39306d399c51" containerName="registry-server" Dec 03 18:27:00 crc kubenswrapper[4687]: I1203 18:27:00.976862 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ba22b3-aa84-4e74-badc-39306d399c51" containerName="registry-server" Dec 03 18:27:00 crc kubenswrapper[4687]: E1203 18:27:00.976879 4687 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ba22b3-aa84-4e74-badc-39306d399c51" containerName="extract-content" Dec 03 18:27:00 crc kubenswrapper[4687]: I1203 18:27:00.976885 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ba22b3-aa84-4e74-badc-39306d399c51" containerName="extract-content" Dec 03 18:27:00 crc kubenswrapper[4687]: E1203 18:27:00.976900 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ba22b3-aa84-4e74-badc-39306d399c51" containerName="extract-utilities" Dec 03 18:27:00 crc kubenswrapper[4687]: I1203 18:27:00.976908 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ba22b3-aa84-4e74-badc-39306d399c51" containerName="extract-utilities" Dec 03 18:27:00 crc kubenswrapper[4687]: I1203 18:27:00.977081 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ce84a46-82bc-42a8-b645-d801d2a8edff" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 03 18:27:00 crc kubenswrapper[4687]: I1203 18:27:00.977093 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ba22b3-aa84-4e74-badc-39306d399c51" containerName="registry-server" Dec 03 18:27:00 crc kubenswrapper[4687]: I1203 18:27:00.978318 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.014451 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wp4s8"] Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.032186 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-utilities\") pod \"certified-operators-wp4s8\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.032373 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-catalog-content\") pod \"certified-operators-wp4s8\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.032466 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8q55\" (UniqueName: \"kubernetes.io/projected/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-kube-api-access-z8q55\") pod \"certified-operators-wp4s8\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.134205 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8q55\" (UniqueName: \"kubernetes.io/projected/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-kube-api-access-z8q55\") pod \"certified-operators-wp4s8\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.134300 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-utilities\") pod \"certified-operators-wp4s8\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.134386 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-catalog-content\") pod \"certified-operators-wp4s8\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.134808 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-utilities\") pod \"certified-operators-wp4s8\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.135086 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-catalog-content\") pod \"certified-operators-wp4s8\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.163904 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8q55\" (UniqueName: \"kubernetes.io/projected/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-kube-api-access-z8q55\") pod \"certified-operators-wp4s8\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.303994 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:01 crc kubenswrapper[4687]: I1203 18:27:01.901453 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wp4s8"] Dec 03 18:27:02 crc kubenswrapper[4687]: I1203 18:27:02.291575 4687 generic.go:334] "Generic (PLEG): container finished" podID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerID="42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf" exitCode=0 Dec 03 18:27:02 crc kubenswrapper[4687]: I1203 18:27:02.291690 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp4s8" event={"ID":"dc08c72b-3d88-4d20-b611-1a5a4d96a42e","Type":"ContainerDied","Data":"42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf"} Dec 03 18:27:02 crc kubenswrapper[4687]: I1203 18:27:02.291943 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp4s8" event={"ID":"dc08c72b-3d88-4d20-b611-1a5a4d96a42e","Type":"ContainerStarted","Data":"37f361aed8d77066cca7a2858d00449d6d757d1812edf312412870d797e5df36"} Dec 03 18:27:02 crc kubenswrapper[4687]: I1203 18:27:02.293654 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:27:03 crc kubenswrapper[4687]: I1203 18:27:03.311467 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp4s8" event={"ID":"dc08c72b-3d88-4d20-b611-1a5a4d96a42e","Type":"ContainerStarted","Data":"e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36"} Dec 03 18:27:04 crc kubenswrapper[4687]: I1203 18:27:04.322018 4687 generic.go:334] "Generic (PLEG): container finished" podID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerID="e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36" exitCode=0 Dec 03 18:27:04 crc kubenswrapper[4687]: I1203 18:27:04.322177 4687 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-wp4s8" event={"ID":"dc08c72b-3d88-4d20-b611-1a5a4d96a42e","Type":"ContainerDied","Data":"e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36"} Dec 03 18:27:05 crc kubenswrapper[4687]: I1203 18:27:05.334762 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp4s8" event={"ID":"dc08c72b-3d88-4d20-b611-1a5a4d96a42e","Type":"ContainerStarted","Data":"f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a"} Dec 03 18:27:05 crc kubenswrapper[4687]: I1203 18:27:05.361872 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wp4s8" podStartSLOduration=2.937970156 podStartE2EDuration="5.361845647s" podCreationTimestamp="2025-12-03 18:27:00 +0000 UTC" firstStartedPulling="2025-12-03 18:27:02.293422885 +0000 UTC m=+2855.184118318" lastFinishedPulling="2025-12-03 18:27:04.717298386 +0000 UTC m=+2857.607993809" observedRunningTime="2025-12-03 18:27:05.354722025 +0000 UTC m=+2858.245417478" watchObservedRunningTime="2025-12-03 18:27:05.361845647 +0000 UTC m=+2858.252541090" Dec 03 18:27:11 crc kubenswrapper[4687]: E1203 18:27:11.295184 4687 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.130:34584->38.102.83.130:36177: write tcp 38.102.83.130:34584->38.102.83.130:36177: write: broken pipe Dec 03 18:27:11 crc kubenswrapper[4687]: I1203 18:27:11.305568 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:11 crc kubenswrapper[4687]: I1203 18:27:11.305892 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:11 crc kubenswrapper[4687]: I1203 18:27:11.374245 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:11 crc kubenswrapper[4687]: I1203 18:27:11.463584 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:11 crc kubenswrapper[4687]: I1203 18:27:11.617152 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wp4s8"] Dec 03 18:27:13 crc kubenswrapper[4687]: I1203 18:27:13.436381 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wp4s8" podUID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerName="registry-server" containerID="cri-o://f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a" gracePeriod=2 Dec 03 18:27:13 crc kubenswrapper[4687]: I1203 18:27:13.908107 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.024312 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-catalog-content\") pod \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.024434 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8q55\" (UniqueName: \"kubernetes.io/projected/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-kube-api-access-z8q55\") pod \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.024589 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-utilities\") pod 
\"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\" (UID: \"dc08c72b-3d88-4d20-b611-1a5a4d96a42e\") " Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.025812 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-utilities" (OuterVolumeSpecName: "utilities") pod "dc08c72b-3d88-4d20-b611-1a5a4d96a42e" (UID: "dc08c72b-3d88-4d20-b611-1a5a4d96a42e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.035998 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-kube-api-access-z8q55" (OuterVolumeSpecName: "kube-api-access-z8q55") pod "dc08c72b-3d88-4d20-b611-1a5a4d96a42e" (UID: "dc08c72b-3d88-4d20-b611-1a5a4d96a42e"). InnerVolumeSpecName "kube-api-access-z8q55". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.111546 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.111610 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.111656 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.112683 
4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.112748 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" gracePeriod=600 Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.127253 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8q55\" (UniqueName: \"kubernetes.io/projected/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-kube-api-access-z8q55\") on node \"crc\" DevicePath \"\"" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.127288 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.196729 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc08c72b-3d88-4d20-b611-1a5a4d96a42e" (UID: "dc08c72b-3d88-4d20-b611-1a5a4d96a42e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.232523 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc08c72b-3d88-4d20-b611-1a5a4d96a42e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:27:14 crc kubenswrapper[4687]: E1203 18:27:14.238658 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.447312 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" exitCode=0 Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.447376 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91"} Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.447412 4687 scope.go:117] "RemoveContainer" containerID="b6692ff1d212118094927c791a4fa3f87932bae6aa68ba875d0eb8e42df513b4" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.449839 4687 generic.go:334] "Generic (PLEG): container finished" podID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerID="f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a" exitCode=0 Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.449862 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-wp4s8" event={"ID":"dc08c72b-3d88-4d20-b611-1a5a4d96a42e","Type":"ContainerDied","Data":"f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a"} Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.449879 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wp4s8" event={"ID":"dc08c72b-3d88-4d20-b611-1a5a4d96a42e","Type":"ContainerDied","Data":"37f361aed8d77066cca7a2858d00449d6d757d1812edf312412870d797e5df36"} Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.449897 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wp4s8" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.450059 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:27:14 crc kubenswrapper[4687]: E1203 18:27:14.450468 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.485579 4687 scope.go:117] "RemoveContainer" containerID="f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.490398 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wp4s8"] Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.499547 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wp4s8"] Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.507077 4687 
scope.go:117] "RemoveContainer" containerID="e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.530258 4687 scope.go:117] "RemoveContainer" containerID="42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.600354 4687 scope.go:117] "RemoveContainer" containerID="f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a" Dec 03 18:27:14 crc kubenswrapper[4687]: E1203 18:27:14.601817 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a\": container with ID starting with f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a not found: ID does not exist" containerID="f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.601864 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a"} err="failed to get container status \"f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a\": rpc error: code = NotFound desc = could not find container \"f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a\": container with ID starting with f0a177b16cd075ee4c7631d30d53969d32629d26b8ba22048dccb5218cd9835a not found: ID does not exist" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.601892 4687 scope.go:117] "RemoveContainer" containerID="e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36" Dec 03 18:27:14 crc kubenswrapper[4687]: E1203 18:27:14.605273 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36\": container with ID starting with 
e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36 not found: ID does not exist" containerID="e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.605316 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36"} err="failed to get container status \"e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36\": rpc error: code = NotFound desc = could not find container \"e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36\": container with ID starting with e3eadc9698b32bfb71cac9394b0e9b62180b22bed281752a96082f4b46377c36 not found: ID does not exist" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.605340 4687 scope.go:117] "RemoveContainer" containerID="42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf" Dec 03 18:27:14 crc kubenswrapper[4687]: E1203 18:27:14.607429 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf\": container with ID starting with 42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf not found: ID does not exist" containerID="42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf" Dec 03 18:27:14 crc kubenswrapper[4687]: I1203 18:27:14.607503 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf"} err="failed to get container status \"42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf\": rpc error: code = NotFound desc = could not find container \"42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf\": container with ID starting with 42e33cfc35e5ae4ffd8932ff623c9b759799eae9338f016417a7b1f2117b82cf not found: ID does not 
exist" Dec 03 18:27:15 crc kubenswrapper[4687]: I1203 18:27:15.417227 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" path="/var/lib/kubelet/pods/dc08c72b-3d88-4d20-b611-1a5a4d96a42e/volumes" Dec 03 18:27:21 crc kubenswrapper[4687]: E1203 18:27:21.729171 4687 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.130:60094->38.102.83.130:36177: read tcp 38.102.83.130:60094->38.102.83.130:36177: read: connection reset by peer Dec 03 18:27:26 crc kubenswrapper[4687]: I1203 18:27:26.407102 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:27:26 crc kubenswrapper[4687]: E1203 18:27:26.408006 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.408076 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:27:40 crc kubenswrapper[4687]: E1203 18:27:40.408866 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.499849 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/tempest-tests-tempest"] Dec 03 18:27:40 crc kubenswrapper[4687]: E1203 18:27:40.500368 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerName="registry-server" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.500391 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerName="registry-server" Dec 03 18:27:40 crc kubenswrapper[4687]: E1203 18:27:40.500408 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerName="extract-content" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.500417 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerName="extract-content" Dec 03 18:27:40 crc kubenswrapper[4687]: E1203 18:27:40.500432 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerName="extract-utilities" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.500439 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerName="extract-utilities" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.500645 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc08c72b-3d88-4d20-b611-1a5a4d96a42e" containerName="registry-server" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.501407 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.504919 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.504995 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.505006 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.505292 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ckmlh" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.511501 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.636521 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.636604 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqfm5\" (UniqueName: \"kubernetes.io/projected/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-kube-api-access-xqfm5\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.636650 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.636708 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.636784 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.636856 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.636890 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.636964 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.637196 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.739267 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.739522 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.739649 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.739750 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.739863 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.739955 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqfm5\" (UniqueName: \"kubernetes.io/projected/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-kube-api-access-xqfm5\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.740033 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-config-data\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.740169 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.740277 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc 
kubenswrapper[4687]: I1203 18:27:40.740318 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.740399 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.740769 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.740835 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.742086 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-config-data\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.750291 4687 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.751203 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.751595 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.765788 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqfm5\" (UniqueName: \"kubernetes.io/projected/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-kube-api-access-xqfm5\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.785891 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " pod="openstack/tempest-tests-tempest" Dec 03 18:27:40 crc kubenswrapper[4687]: I1203 18:27:40.827673 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 18:27:41 crc kubenswrapper[4687]: I1203 18:27:41.254662 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 03 18:27:41 crc kubenswrapper[4687]: I1203 18:27:41.705690 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3c56ab4c-455a-4436-927e-3dba7e4aa0ba","Type":"ContainerStarted","Data":"24f3459adddab09c715eca70496d477c499a7c7821b4061f08cb9639f4b3c2da"} Dec 03 18:27:55 crc kubenswrapper[4687]: I1203 18:27:55.408396 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:27:55 crc kubenswrapper[4687]: E1203 18:27:55.409311 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:28:09 crc kubenswrapper[4687]: I1203 18:28:09.408330 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:28:09 crc kubenswrapper[4687]: E1203 18:28:09.409496 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.283044 4687 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kgpfq"] Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.287229 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.303520 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgpfq"] Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.437906 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-catalog-content\") pod \"redhat-marketplace-kgpfq\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.437966 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-utilities\") pod \"redhat-marketplace-kgpfq\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.438005 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sr25\" (UniqueName: \"kubernetes.io/projected/e16e3e78-25bc-4118-8a16-d92734607fec-kube-api-access-5sr25\") pod \"redhat-marketplace-kgpfq\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.541771 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-catalog-content\") pod \"redhat-marketplace-kgpfq\" (UID: 
\"e16e3e78-25bc-4118-8a16-d92734607fec\") " pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.541823 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-utilities\") pod \"redhat-marketplace-kgpfq\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.541848 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sr25\" (UniqueName: \"kubernetes.io/projected/e16e3e78-25bc-4118-8a16-d92734607fec-kube-api-access-5sr25\") pod \"redhat-marketplace-kgpfq\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.542484 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-utilities\") pod \"redhat-marketplace-kgpfq\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.542948 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-catalog-content\") pod \"redhat-marketplace-kgpfq\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.564498 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sr25\" (UniqueName: \"kubernetes.io/projected/e16e3e78-25bc-4118-8a16-d92734607fec-kube-api-access-5sr25\") pod \"redhat-marketplace-kgpfq\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " 
pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: I1203 18:28:10.629248 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:10 crc kubenswrapper[4687]: E1203 18:28:10.683906 4687 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 03 18:28:10 crc kubenswrapper[4687]: E1203 18:28:10.684174 4687 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/
openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqfm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(3c56ab4c-455a-4436-927e-3dba7e4aa0ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 18:28:10 crc kubenswrapper[4687]: E1203 18:28:10.685844 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="3c56ab4c-455a-4436-927e-3dba7e4aa0ba" Dec 03 18:28:10 crc kubenswrapper[4687]: E1203 18:28:10.991593 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="3c56ab4c-455a-4436-927e-3dba7e4aa0ba" Dec 03 18:28:11 crc kubenswrapper[4687]: I1203 18:28:11.297323 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgpfq"] Dec 03 18:28:12 crc kubenswrapper[4687]: I1203 18:28:12.004645 4687 generic.go:334] "Generic (PLEG): container finished" podID="e16e3e78-25bc-4118-8a16-d92734607fec" containerID="26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74" exitCode=0 Dec 03 18:28:12 crc kubenswrapper[4687]: I1203 18:28:12.004874 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgpfq" event={"ID":"e16e3e78-25bc-4118-8a16-d92734607fec","Type":"ContainerDied","Data":"26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74"} Dec 03 18:28:12 crc kubenswrapper[4687]: I1203 18:28:12.004965 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgpfq" event={"ID":"e16e3e78-25bc-4118-8a16-d92734607fec","Type":"ContainerStarted","Data":"31cfc9a6b4d017c22497c27d47aa392f44a36e3b227c4db902619d979e439e54"} Dec 03 18:28:14 crc kubenswrapper[4687]: I1203 18:28:14.030850 4687 generic.go:334] "Generic (PLEG): container finished" podID="e16e3e78-25bc-4118-8a16-d92734607fec" containerID="cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea" exitCode=0 Dec 03 18:28:14 crc kubenswrapper[4687]: I1203 
18:28:14.030937 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgpfq" event={"ID":"e16e3e78-25bc-4118-8a16-d92734607fec","Type":"ContainerDied","Data":"cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea"} Dec 03 18:28:15 crc kubenswrapper[4687]: I1203 18:28:15.045043 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgpfq" event={"ID":"e16e3e78-25bc-4118-8a16-d92734607fec","Type":"ContainerStarted","Data":"1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1"} Dec 03 18:28:15 crc kubenswrapper[4687]: I1203 18:28:15.068501 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kgpfq" podStartSLOduration=2.598337596 podStartE2EDuration="5.068483947s" podCreationTimestamp="2025-12-03 18:28:10 +0000 UTC" firstStartedPulling="2025-12-03 18:28:12.006534478 +0000 UTC m=+2924.897229911" lastFinishedPulling="2025-12-03 18:28:14.476680819 +0000 UTC m=+2927.367376262" observedRunningTime="2025-12-03 18:28:15.06378233 +0000 UTC m=+2927.954477803" watchObservedRunningTime="2025-12-03 18:28:15.068483947 +0000 UTC m=+2927.959179380" Dec 03 18:28:20 crc kubenswrapper[4687]: I1203 18:28:20.408842 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:28:20 crc kubenswrapper[4687]: E1203 18:28:20.409610 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:28:20 crc kubenswrapper[4687]: I1203 18:28:20.630678 4687 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:20 crc kubenswrapper[4687]: I1203 18:28:20.630753 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:20 crc kubenswrapper[4687]: I1203 18:28:20.700700 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:21 crc kubenswrapper[4687]: I1203 18:28:21.189854 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:21 crc kubenswrapper[4687]: I1203 18:28:21.244031 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgpfq"] Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.144010 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kgpfq" podUID="e16e3e78-25bc-4118-8a16-d92734607fec" containerName="registry-server" containerID="cri-o://1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1" gracePeriod=2 Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.609918 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.707079 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sr25\" (UniqueName: \"kubernetes.io/projected/e16e3e78-25bc-4118-8a16-d92734607fec-kube-api-access-5sr25\") pod \"e16e3e78-25bc-4118-8a16-d92734607fec\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.707256 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-catalog-content\") pod \"e16e3e78-25bc-4118-8a16-d92734607fec\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.707427 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-utilities\") pod \"e16e3e78-25bc-4118-8a16-d92734607fec\" (UID: \"e16e3e78-25bc-4118-8a16-d92734607fec\") " Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.708841 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-utilities" (OuterVolumeSpecName: "utilities") pod "e16e3e78-25bc-4118-8a16-d92734607fec" (UID: "e16e3e78-25bc-4118-8a16-d92734607fec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.714489 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16e3e78-25bc-4118-8a16-d92734607fec-kube-api-access-5sr25" (OuterVolumeSpecName: "kube-api-access-5sr25") pod "e16e3e78-25bc-4118-8a16-d92734607fec" (UID: "e16e3e78-25bc-4118-8a16-d92734607fec"). InnerVolumeSpecName "kube-api-access-5sr25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.742504 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e16e3e78-25bc-4118-8a16-d92734607fec" (UID: "e16e3e78-25bc-4118-8a16-d92734607fec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.809180 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.809209 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sr25\" (UniqueName: \"kubernetes.io/projected/e16e3e78-25bc-4118-8a16-d92734607fec-kube-api-access-5sr25\") on node \"crc\" DevicePath \"\"" Dec 03 18:28:23 crc kubenswrapper[4687]: I1203 18:28:23.809221 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16e3e78-25bc-4118-8a16-d92734607fec-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.154095 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3c56ab4c-455a-4436-927e-3dba7e4aa0ba","Type":"ContainerStarted","Data":"c377f7ebca8b9d7cfd054a8f51990a2e176acdb10756da0c6c1cf1e448ffa83f"} Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.156700 4687 generic.go:334] "Generic (PLEG): container finished" podID="e16e3e78-25bc-4118-8a16-d92734607fec" containerID="1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1" exitCode=0 Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.156764 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kgpfq" event={"ID":"e16e3e78-25bc-4118-8a16-d92734607fec","Type":"ContainerDied","Data":"1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1"} Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.156814 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kgpfq" event={"ID":"e16e3e78-25bc-4118-8a16-d92734607fec","Type":"ContainerDied","Data":"31cfc9a6b4d017c22497c27d47aa392f44a36e3b227c4db902619d979e439e54"} Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.156801 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kgpfq" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.156838 4687 scope.go:117] "RemoveContainer" containerID="1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.177040 4687 scope.go:117] "RemoveContainer" containerID="cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.193714 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.480122354 podStartE2EDuration="45.193683554s" podCreationTimestamp="2025-12-03 18:27:39 +0000 UTC" firstStartedPulling="2025-12-03 18:27:41.262096809 +0000 UTC m=+2894.152792242" lastFinishedPulling="2025-12-03 18:28:22.975658009 +0000 UTC m=+2935.866353442" observedRunningTime="2025-12-03 18:28:24.186799089 +0000 UTC m=+2937.077494522" watchObservedRunningTime="2025-12-03 18:28:24.193683554 +0000 UTC m=+2937.084378987" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.224653 4687 scope.go:117] "RemoveContainer" containerID="26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.246847 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kgpfq"] Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.257500 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kgpfq"] Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.283176 4687 scope.go:117] "RemoveContainer" containerID="1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1" Dec 03 18:28:24 crc kubenswrapper[4687]: E1203 18:28:24.283945 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1\": container with ID starting with 1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1 not found: ID does not exist" containerID="1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.283981 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1"} err="failed to get container status \"1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1\": rpc error: code = NotFound desc = could not find container \"1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1\": container with ID starting with 1ac590da8e6b4f7a5a597329ceff2f5d39620d3148a93e3451dfe9df6d9475b1 not found: ID does not exist" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.284031 4687 scope.go:117] "RemoveContainer" containerID="cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea" Dec 03 18:28:24 crc kubenswrapper[4687]: E1203 18:28:24.284281 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea\": container with ID starting with cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea 
not found: ID does not exist" containerID="cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.284313 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea"} err="failed to get container status \"cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea\": rpc error: code = NotFound desc = could not find container \"cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea\": container with ID starting with cad0e5dabdae6d592449ada958301a60c814fc8a4011a0d6d0ed48dd4bee95ea not found: ID does not exist" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.284332 4687 scope.go:117] "RemoveContainer" containerID="26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74" Dec 03 18:28:24 crc kubenswrapper[4687]: E1203 18:28:24.284590 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74\": container with ID starting with 26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74 not found: ID does not exist" containerID="26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74" Dec 03 18:28:24 crc kubenswrapper[4687]: I1203 18:28:24.284616 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74"} err="failed to get container status \"26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74\": rpc error: code = NotFound desc = could not find container \"26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74\": container with ID starting with 26c784a96ac1024445efcc4958716c30cf01d76da2f379aef44a9e1194713b74 not found: ID does not exist" Dec 03 18:28:25 crc kubenswrapper[4687]: I1203 
18:28:25.427596 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16e3e78-25bc-4118-8a16-d92734607fec" path="/var/lib/kubelet/pods/e16e3e78-25bc-4118-8a16-d92734607fec/volumes" Dec 03 18:28:32 crc kubenswrapper[4687]: I1203 18:28:32.407890 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:28:32 crc kubenswrapper[4687]: E1203 18:28:32.408977 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:28:43 crc kubenswrapper[4687]: I1203 18:28:43.408557 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:28:43 crc kubenswrapper[4687]: E1203 18:28:43.409285 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:28:57 crc kubenswrapper[4687]: I1203 18:28:57.412911 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:28:57 crc kubenswrapper[4687]: E1203 18:28:57.413699 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:29:09 crc kubenswrapper[4687]: I1203 18:29:09.408034 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:29:09 crc kubenswrapper[4687]: E1203 18:29:09.409244 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:29:23 crc kubenswrapper[4687]: I1203 18:29:23.408692 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:29:23 crc kubenswrapper[4687]: E1203 18:29:23.412464 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:29:38 crc kubenswrapper[4687]: I1203 18:29:38.406894 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:29:38 crc kubenswrapper[4687]: E1203 18:29:38.407582 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:29:52 crc kubenswrapper[4687]: I1203 18:29:52.408396 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:29:52 crc kubenswrapper[4687]: E1203 18:29:52.409375 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.143608 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7"] Dec 03 18:30:00 crc kubenswrapper[4687]: E1203 18:30:00.144455 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16e3e78-25bc-4118-8a16-d92734607fec" containerName="registry-server" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.144471 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16e3e78-25bc-4118-8a16-d92734607fec" containerName="registry-server" Dec 03 18:30:00 crc kubenswrapper[4687]: E1203 18:30:00.144487 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16e3e78-25bc-4118-8a16-d92734607fec" containerName="extract-content" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.144493 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16e3e78-25bc-4118-8a16-d92734607fec" containerName="extract-content" Dec 03 18:30:00 crc kubenswrapper[4687]: E1203 18:30:00.144509 4687 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e16e3e78-25bc-4118-8a16-d92734607fec" containerName="extract-utilities" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.144516 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16e3e78-25bc-4118-8a16-d92734607fec" containerName="extract-utilities" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.144726 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16e3e78-25bc-4118-8a16-d92734607fec" containerName="registry-server" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.145450 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.147166 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.147502 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.158214 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7"] Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.278030 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81ee75fd-10b7-4a73-877f-9265f59afe3f-secret-volume\") pod \"collect-profiles-29413110-f5wk7\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.278164 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hfzc\" (UniqueName: 
\"kubernetes.io/projected/81ee75fd-10b7-4a73-877f-9265f59afe3f-kube-api-access-9hfzc\") pod \"collect-profiles-29413110-f5wk7\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.278202 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81ee75fd-10b7-4a73-877f-9265f59afe3f-config-volume\") pod \"collect-profiles-29413110-f5wk7\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.380035 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81ee75fd-10b7-4a73-877f-9265f59afe3f-config-volume\") pod \"collect-profiles-29413110-f5wk7\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.380209 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81ee75fd-10b7-4a73-877f-9265f59afe3f-secret-volume\") pod \"collect-profiles-29413110-f5wk7\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.380297 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hfzc\" (UniqueName: \"kubernetes.io/projected/81ee75fd-10b7-4a73-877f-9265f59afe3f-kube-api-access-9hfzc\") pod \"collect-profiles-29413110-f5wk7\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc 
kubenswrapper[4687]: I1203 18:30:00.381016 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81ee75fd-10b7-4a73-877f-9265f59afe3f-config-volume\") pod \"collect-profiles-29413110-f5wk7\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.385511 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81ee75fd-10b7-4a73-877f-9265f59afe3f-secret-volume\") pod \"collect-profiles-29413110-f5wk7\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.397939 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hfzc\" (UniqueName: \"kubernetes.io/projected/81ee75fd-10b7-4a73-877f-9265f59afe3f-kube-api-access-9hfzc\") pod \"collect-profiles-29413110-f5wk7\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.465272 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:00 crc kubenswrapper[4687]: I1203 18:30:00.930708 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7"] Dec 03 18:30:01 crc kubenswrapper[4687]: I1203 18:30:01.176898 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" event={"ID":"81ee75fd-10b7-4a73-877f-9265f59afe3f","Type":"ContainerStarted","Data":"b16cbdcd38d1499d1153b6a3a9fa1b81a42dbbcf7a6f133ac9dd09e4caad9abb"} Dec 03 18:30:02 crc kubenswrapper[4687]: I1203 18:30:02.185911 4687 generic.go:334] "Generic (PLEG): container finished" podID="81ee75fd-10b7-4a73-877f-9265f59afe3f" containerID="f0898783386fd454081b51f406741da2af6039baaec63d47f7615055b06f260e" exitCode=0 Dec 03 18:30:02 crc kubenswrapper[4687]: I1203 18:30:02.185976 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" event={"ID":"81ee75fd-10b7-4a73-877f-9265f59afe3f","Type":"ContainerDied","Data":"f0898783386fd454081b51f406741da2af6039baaec63d47f7615055b06f260e"} Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.550436 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.741579 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81ee75fd-10b7-4a73-877f-9265f59afe3f-secret-volume\") pod \"81ee75fd-10b7-4a73-877f-9265f59afe3f\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.741701 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81ee75fd-10b7-4a73-877f-9265f59afe3f-config-volume\") pod \"81ee75fd-10b7-4a73-877f-9265f59afe3f\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.741784 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hfzc\" (UniqueName: \"kubernetes.io/projected/81ee75fd-10b7-4a73-877f-9265f59afe3f-kube-api-access-9hfzc\") pod \"81ee75fd-10b7-4a73-877f-9265f59afe3f\" (UID: \"81ee75fd-10b7-4a73-877f-9265f59afe3f\") " Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.742291 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ee75fd-10b7-4a73-877f-9265f59afe3f-config-volume" (OuterVolumeSpecName: "config-volume") pod "81ee75fd-10b7-4a73-877f-9265f59afe3f" (UID: "81ee75fd-10b7-4a73-877f-9265f59afe3f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.747779 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ee75fd-10b7-4a73-877f-9265f59afe3f-kube-api-access-9hfzc" (OuterVolumeSpecName: "kube-api-access-9hfzc") pod "81ee75fd-10b7-4a73-877f-9265f59afe3f" (UID: "81ee75fd-10b7-4a73-877f-9265f59afe3f"). 
InnerVolumeSpecName "kube-api-access-9hfzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.749389 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ee75fd-10b7-4a73-877f-9265f59afe3f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81ee75fd-10b7-4a73-877f-9265f59afe3f" (UID: "81ee75fd-10b7-4a73-877f-9265f59afe3f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.843904 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81ee75fd-10b7-4a73-877f-9265f59afe3f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.843958 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81ee75fd-10b7-4a73-877f-9265f59afe3f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 18:30:03 crc kubenswrapper[4687]: I1203 18:30:03.843969 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hfzc\" (UniqueName: \"kubernetes.io/projected/81ee75fd-10b7-4a73-877f-9265f59afe3f-kube-api-access-9hfzc\") on node \"crc\" DevicePath \"\"" Dec 03 18:30:04 crc kubenswrapper[4687]: I1203 18:30:04.204899 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" event={"ID":"81ee75fd-10b7-4a73-877f-9265f59afe3f","Type":"ContainerDied","Data":"b16cbdcd38d1499d1153b6a3a9fa1b81a42dbbcf7a6f133ac9dd09e4caad9abb"} Dec 03 18:30:04 crc kubenswrapper[4687]: I1203 18:30:04.205228 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b16cbdcd38d1499d1153b6a3a9fa1b81a42dbbcf7a6f133ac9dd09e4caad9abb" Dec 03 18:30:04 crc kubenswrapper[4687]: I1203 18:30:04.204951 4687 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413110-f5wk7" Dec 03 18:30:04 crc kubenswrapper[4687]: I1203 18:30:04.677799 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5"] Dec 03 18:30:04 crc kubenswrapper[4687]: I1203 18:30:04.689375 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413065-fnzv5"] Dec 03 18:30:05 crc kubenswrapper[4687]: I1203 18:30:05.418162 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b757c215-9461-4e39-bbd9-aa74875edd28" path="/var/lib/kubelet/pods/b757c215-9461-4e39-bbd9-aa74875edd28/volumes" Dec 03 18:30:06 crc kubenswrapper[4687]: I1203 18:30:06.407639 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:30:06 crc kubenswrapper[4687]: E1203 18:30:06.408250 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:30:17 crc kubenswrapper[4687]: I1203 18:30:17.412971 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:30:17 crc kubenswrapper[4687]: E1203 18:30:17.413790 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:30:30 crc kubenswrapper[4687]: I1203 18:30:30.407000 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:30:30 crc kubenswrapper[4687]: E1203 18:30:30.407975 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:30:41 crc kubenswrapper[4687]: I1203 18:30:41.466382 4687 scope.go:117] "RemoveContainer" containerID="8cf72dcb64bf9cae4bfd608bab3a7dfa6fefcaf9118ab63c72b23880bd46883a" Dec 03 18:30:42 crc kubenswrapper[4687]: I1203 18:30:42.408226 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:30:42 crc kubenswrapper[4687]: E1203 18:30:42.409231 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:30:54 crc kubenswrapper[4687]: I1203 18:30:54.407548 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:30:54 crc kubenswrapper[4687]: E1203 18:30:54.408285 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:31:08 crc kubenswrapper[4687]: I1203 18:31:08.407530 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:31:08 crc kubenswrapper[4687]: E1203 18:31:08.408398 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:31:22 crc kubenswrapper[4687]: I1203 18:31:22.407995 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:31:22 crc kubenswrapper[4687]: E1203 18:31:22.408869 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:31:36 crc kubenswrapper[4687]: I1203 18:31:36.407225 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:31:36 crc kubenswrapper[4687]: E1203 18:31:36.407954 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:31:49 crc kubenswrapper[4687]: I1203 18:31:49.407375 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:31:49 crc kubenswrapper[4687]: E1203 18:31:49.408349 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:32:00 crc kubenswrapper[4687]: I1203 18:32:00.407400 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:32:00 crc kubenswrapper[4687]: E1203 18:32:00.408156 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:32:13 crc kubenswrapper[4687]: I1203 18:32:13.407745 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:32:13 crc kubenswrapper[4687]: E1203 18:32:13.408715 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:32:25 crc kubenswrapper[4687]: I1203 18:32:25.408320 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:32:26 crc kubenswrapper[4687]: I1203 18:32:26.527213 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"031c085439df18615aa88360df4c28dc0098da8335ec8be859bc4cf171d75d7a"} Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.566587 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vdqwc"] Dec 03 18:33:58 crc kubenswrapper[4687]: E1203 18:33:58.567445 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ee75fd-10b7-4a73-877f-9265f59afe3f" containerName="collect-profiles" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.567458 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ee75fd-10b7-4a73-877f-9265f59afe3f" containerName="collect-profiles" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.567644 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ee75fd-10b7-4a73-877f-9265f59afe3f" containerName="collect-profiles" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.569141 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.579324 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vdqwc"] Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.729762 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zmhv\" (UniqueName: \"kubernetes.io/projected/d8c2c83b-47e6-4b42-a034-ba86180d732c-kube-api-access-9zmhv\") pod \"redhat-operators-vdqwc\" (UID: \"d8c2c83b-47e6-4b42-a034-ba86180d732c\") " pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.729850 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c83b-47e6-4b42-a034-ba86180d732c-catalog-content\") pod \"redhat-operators-vdqwc\" (UID: \"d8c2c83b-47e6-4b42-a034-ba86180d732c\") " pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.729898 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c83b-47e6-4b42-a034-ba86180d732c-utilities\") pod \"redhat-operators-vdqwc\" (UID: \"d8c2c83b-47e6-4b42-a034-ba86180d732c\") " pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.833055 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zmhv\" (UniqueName: \"kubernetes.io/projected/d8c2c83b-47e6-4b42-a034-ba86180d732c-kube-api-access-9zmhv\") pod \"redhat-operators-vdqwc\" (UID: \"d8c2c83b-47e6-4b42-a034-ba86180d732c\") " pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.833217 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c83b-47e6-4b42-a034-ba86180d732c-catalog-content\") pod \"redhat-operators-vdqwc\" (UID: \"d8c2c83b-47e6-4b42-a034-ba86180d732c\") " pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.833289 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c83b-47e6-4b42-a034-ba86180d732c-utilities\") pod \"redhat-operators-vdqwc\" (UID: \"d8c2c83b-47e6-4b42-a034-ba86180d732c\") " pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.833779 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c83b-47e6-4b42-a034-ba86180d732c-catalog-content\") pod \"redhat-operators-vdqwc\" (UID: \"d8c2c83b-47e6-4b42-a034-ba86180d732c\") " pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.833851 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c2c83b-47e6-4b42-a034-ba86180d732c-utilities\") pod \"redhat-operators-vdqwc\" (UID: \"d8c2c83b-47e6-4b42-a034-ba86180d732c\") " pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.851283 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zmhv\" (UniqueName: \"kubernetes.io/projected/d8c2c83b-47e6-4b42-a034-ba86180d732c-kube-api-access-9zmhv\") pod \"redhat-operators-vdqwc\" (UID: \"d8c2c83b-47e6-4b42-a034-ba86180d732c\") " pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:58 crc kubenswrapper[4687]: I1203 18:33:58.898712 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:33:59 crc kubenswrapper[4687]: I1203 18:33:59.403910 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vdqwc"] Dec 03 18:33:59 crc kubenswrapper[4687]: I1203 18:33:59.421553 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdqwc" event={"ID":"d8c2c83b-47e6-4b42-a034-ba86180d732c","Type":"ContainerStarted","Data":"aae80d74da08f2d0cc3b233a807e52cc09cbed9c5383794092eb5f5900998b17"} Dec 03 18:34:00 crc kubenswrapper[4687]: I1203 18:34:00.432674 4687 generic.go:334] "Generic (PLEG): container finished" podID="d8c2c83b-47e6-4b42-a034-ba86180d732c" containerID="d969664d0db1ae6aae2c44902c2f97ca4000db1d5946e1eb7077ab1f24677044" exitCode=0 Dec 03 18:34:00 crc kubenswrapper[4687]: I1203 18:34:00.432778 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdqwc" event={"ID":"d8c2c83b-47e6-4b42-a034-ba86180d732c","Type":"ContainerDied","Data":"d969664d0db1ae6aae2c44902c2f97ca4000db1d5946e1eb7077ab1f24677044"} Dec 03 18:34:00 crc kubenswrapper[4687]: I1203 18:34:00.437701 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:34:07 crc kubenswrapper[4687]: I1203 18:34:07.505169 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdqwc" event={"ID":"d8c2c83b-47e6-4b42-a034-ba86180d732c","Type":"ContainerStarted","Data":"0e1dfe7ead82e8ee10655a5501a40c58ebc0137100fe6672b6fd0547f77826c6"} Dec 03 18:34:09 crc kubenswrapper[4687]: I1203 18:34:09.521309 4687 generic.go:334] "Generic (PLEG): container finished" podID="d8c2c83b-47e6-4b42-a034-ba86180d732c" containerID="0e1dfe7ead82e8ee10655a5501a40c58ebc0137100fe6672b6fd0547f77826c6" exitCode=0 Dec 03 18:34:09 crc kubenswrapper[4687]: I1203 18:34:09.521578 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-vdqwc" event={"ID":"d8c2c83b-47e6-4b42-a034-ba86180d732c","Type":"ContainerDied","Data":"0e1dfe7ead82e8ee10655a5501a40c58ebc0137100fe6672b6fd0547f77826c6"} Dec 03 18:34:12 crc kubenswrapper[4687]: I1203 18:34:12.579817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vdqwc" event={"ID":"d8c2c83b-47e6-4b42-a034-ba86180d732c","Type":"ContainerStarted","Data":"d84fb2172b95b2169869222459f5eefc943f6593fe64302d4f363d53c91c98b4"} Dec 03 18:34:12 crc kubenswrapper[4687]: I1203 18:34:12.615179 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vdqwc" podStartSLOduration=3.087950379 podStartE2EDuration="14.615155368s" podCreationTimestamp="2025-12-03 18:33:58 +0000 UTC" firstStartedPulling="2025-12-03 18:34:00.437373147 +0000 UTC m=+3273.328068580" lastFinishedPulling="2025-12-03 18:34:11.964578136 +0000 UTC m=+3284.855273569" observedRunningTime="2025-12-03 18:34:12.603541935 +0000 UTC m=+3285.494237378" watchObservedRunningTime="2025-12-03 18:34:12.615155368 +0000 UTC m=+3285.505850801" Dec 03 18:34:18 crc kubenswrapper[4687]: I1203 18:34:18.899500 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:34:18 crc kubenswrapper[4687]: I1203 18:34:18.900031 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:34:18 crc kubenswrapper[4687]: I1203 18:34:18.981053 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:34:19 crc kubenswrapper[4687]: I1203 18:34:19.685364 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vdqwc" Dec 03 18:34:19 crc kubenswrapper[4687]: I1203 18:34:19.765682 4687 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vdqwc"] Dec 03 18:34:19 crc kubenswrapper[4687]: I1203 18:34:19.810569 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58svh"] Dec 03 18:34:19 crc kubenswrapper[4687]: I1203 18:34:19.811064 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-58svh" podUID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerName="registry-server" containerID="cri-o://b62de15eab5ca57fb337d0bf438dbc1d74701733d9c6e58e4772d17b23b55ebf" gracePeriod=2 Dec 03 18:34:21 crc kubenswrapper[4687]: I1203 18:34:21.661790 4687 generic.go:334] "Generic (PLEG): container finished" podID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerID="b62de15eab5ca57fb337d0bf438dbc1d74701733d9c6e58e4772d17b23b55ebf" exitCode=0 Dec 03 18:34:21 crc kubenswrapper[4687]: I1203 18:34:21.662166 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58svh" event={"ID":"e4fc20e2-ab37-41f6-973f-992fcb3de184","Type":"ContainerDied","Data":"b62de15eab5ca57fb337d0bf438dbc1d74701733d9c6e58e4772d17b23b55ebf"} Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.204257 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-58svh" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.311770 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-utilities\") pod \"e4fc20e2-ab37-41f6-973f-992fcb3de184\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.312215 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf77q\" (UniqueName: \"kubernetes.io/projected/e4fc20e2-ab37-41f6-973f-992fcb3de184-kube-api-access-mf77q\") pod \"e4fc20e2-ab37-41f6-973f-992fcb3de184\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.312319 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-catalog-content\") pod \"e4fc20e2-ab37-41f6-973f-992fcb3de184\" (UID: \"e4fc20e2-ab37-41f6-973f-992fcb3de184\") " Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.314700 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-utilities" (OuterVolumeSpecName: "utilities") pod "e4fc20e2-ab37-41f6-973f-992fcb3de184" (UID: "e4fc20e2-ab37-41f6-973f-992fcb3de184"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.341901 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4fc20e2-ab37-41f6-973f-992fcb3de184-kube-api-access-mf77q" (OuterVolumeSpecName: "kube-api-access-mf77q") pod "e4fc20e2-ab37-41f6-973f-992fcb3de184" (UID: "e4fc20e2-ab37-41f6-973f-992fcb3de184"). InnerVolumeSpecName "kube-api-access-mf77q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.414768 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.414807 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf77q\" (UniqueName: \"kubernetes.io/projected/e4fc20e2-ab37-41f6-973f-992fcb3de184-kube-api-access-mf77q\") on node \"crc\" DevicePath \"\"" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.454968 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4fc20e2-ab37-41f6-973f-992fcb3de184" (UID: "e4fc20e2-ab37-41f6-973f-992fcb3de184"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.517090 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4fc20e2-ab37-41f6-973f-992fcb3de184-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.673250 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-58svh" event={"ID":"e4fc20e2-ab37-41f6-973f-992fcb3de184","Type":"ContainerDied","Data":"012b86c2e8420e0b6b3f48e26209aca9fd46944d2390c8dbbdcfbc3e618ddf92"} Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.674096 4687 scope.go:117] "RemoveContainer" containerID="b62de15eab5ca57fb337d0bf438dbc1d74701733d9c6e58e4772d17b23b55ebf" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.673302 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-58svh" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.704440 4687 scope.go:117] "RemoveContainer" containerID="42d91f532a9fd1a6eedddc8f4271757923fa606ba5b8cb36e421043572557f4b" Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.721648 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-58svh"] Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.732424 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-58svh"] Dec 03 18:34:22 crc kubenswrapper[4687]: I1203 18:34:22.744380 4687 scope.go:117] "RemoveContainer" containerID="c93226976f0f44cd73f99a5cb082ef00716720ede3a8c748e462afca2ffe7382" Dec 03 18:34:23 crc kubenswrapper[4687]: I1203 18:34:23.419048 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4fc20e2-ab37-41f6-973f-992fcb3de184" path="/var/lib/kubelet/pods/e4fc20e2-ab37-41f6-973f-992fcb3de184/volumes" Dec 03 18:34:44 crc kubenswrapper[4687]: I1203 18:34:44.111507 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:34:44 crc kubenswrapper[4687]: I1203 18:34:44.112271 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:35:14 crc kubenswrapper[4687]: I1203 18:35:14.111802 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:35:14 crc kubenswrapper[4687]: I1203 18:35:14.113511 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.789808 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-plpjx"] Dec 03 18:35:18 crc kubenswrapper[4687]: E1203 18:35:18.792213 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerName="extract-content" Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.792348 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerName="extract-content" Dec 03 18:35:18 crc kubenswrapper[4687]: E1203 18:35:18.792467 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerName="extract-utilities" Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.792559 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerName="extract-utilities" Dec 03 18:35:18 crc kubenswrapper[4687]: E1203 18:35:18.792656 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerName="registry-server" Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.792746 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerName="registry-server" Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.793078 4687 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="e4fc20e2-ab37-41f6-973f-992fcb3de184" containerName="registry-server" Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.795284 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.815845 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plpjx"] Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.934098 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b429p\" (UniqueName: \"kubernetes.io/projected/65863475-b2e2-4608-9f5e-45b0de7b23a4-kube-api-access-b429p\") pod \"community-operators-plpjx\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.934187 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-utilities\") pod \"community-operators-plpjx\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:18 crc kubenswrapper[4687]: I1203 18:35:18.934354 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-catalog-content\") pod \"community-operators-plpjx\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:19 crc kubenswrapper[4687]: I1203 18:35:19.036010 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b429p\" (UniqueName: \"kubernetes.io/projected/65863475-b2e2-4608-9f5e-45b0de7b23a4-kube-api-access-b429p\") pod 
\"community-operators-plpjx\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:19 crc kubenswrapper[4687]: I1203 18:35:19.036092 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-utilities\") pod \"community-operators-plpjx\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:19 crc kubenswrapper[4687]: I1203 18:35:19.036379 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-catalog-content\") pod \"community-operators-plpjx\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:19 crc kubenswrapper[4687]: I1203 18:35:19.036689 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-utilities\") pod \"community-operators-plpjx\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:19 crc kubenswrapper[4687]: I1203 18:35:19.036849 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-catalog-content\") pod \"community-operators-plpjx\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:19 crc kubenswrapper[4687]: I1203 18:35:19.065582 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b429p\" (UniqueName: \"kubernetes.io/projected/65863475-b2e2-4608-9f5e-45b0de7b23a4-kube-api-access-b429p\") pod \"community-operators-plpjx\" (UID: 
\"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:19 crc kubenswrapper[4687]: I1203 18:35:19.119949 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:19 crc kubenswrapper[4687]: I1203 18:35:19.617844 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-plpjx"] Dec 03 18:35:20 crc kubenswrapper[4687]: I1203 18:35:20.209059 4687 generic.go:334] "Generic (PLEG): container finished" podID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerID="bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b" exitCode=0 Dec 03 18:35:20 crc kubenswrapper[4687]: I1203 18:35:20.209108 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plpjx" event={"ID":"65863475-b2e2-4608-9f5e-45b0de7b23a4","Type":"ContainerDied","Data":"bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b"} Dec 03 18:35:20 crc kubenswrapper[4687]: I1203 18:35:20.209160 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plpjx" event={"ID":"65863475-b2e2-4608-9f5e-45b0de7b23a4","Type":"ContainerStarted","Data":"5d8cff792c07d997e9d54ce763388f22bba3ff5cd58366f2c8c66e195ecce65c"} Dec 03 18:35:21 crc kubenswrapper[4687]: I1203 18:35:21.218939 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plpjx" event={"ID":"65863475-b2e2-4608-9f5e-45b0de7b23a4","Type":"ContainerStarted","Data":"a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c"} Dec 03 18:35:22 crc kubenswrapper[4687]: I1203 18:35:22.229691 4687 generic.go:334] "Generic (PLEG): container finished" podID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerID="a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c" exitCode=0 Dec 03 18:35:22 crc kubenswrapper[4687]: I1203 
18:35:22.229746 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plpjx" event={"ID":"65863475-b2e2-4608-9f5e-45b0de7b23a4","Type":"ContainerDied","Data":"a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c"} Dec 03 18:35:25 crc kubenswrapper[4687]: I1203 18:35:25.257747 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plpjx" event={"ID":"65863475-b2e2-4608-9f5e-45b0de7b23a4","Type":"ContainerStarted","Data":"d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f"} Dec 03 18:35:25 crc kubenswrapper[4687]: I1203 18:35:25.288683 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-plpjx" podStartSLOduration=3.701627734 podStartE2EDuration="7.288657881s" podCreationTimestamp="2025-12-03 18:35:18 +0000 UTC" firstStartedPulling="2025-12-03 18:35:20.211375858 +0000 UTC m=+3353.102071291" lastFinishedPulling="2025-12-03 18:35:23.798406005 +0000 UTC m=+3356.689101438" observedRunningTime="2025-12-03 18:35:25.283772939 +0000 UTC m=+3358.174468382" watchObservedRunningTime="2025-12-03 18:35:25.288657881 +0000 UTC m=+3358.179353324" Dec 03 18:35:29 crc kubenswrapper[4687]: I1203 18:35:29.120235 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:29 crc kubenswrapper[4687]: I1203 18:35:29.120896 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:29 crc kubenswrapper[4687]: I1203 18:35:29.185386 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:29 crc kubenswrapper[4687]: I1203 18:35:29.356402 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-plpjx" Dec 
03 18:35:29 crc kubenswrapper[4687]: I1203 18:35:29.443214 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plpjx"] Dec 03 18:35:31 crc kubenswrapper[4687]: I1203 18:35:31.323906 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-plpjx" podUID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerName="registry-server" containerID="cri-o://d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f" gracePeriod=2 Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.672194 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.707354 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b429p\" (UniqueName: \"kubernetes.io/projected/65863475-b2e2-4608-9f5e-45b0de7b23a4-kube-api-access-b429p\") pod \"65863475-b2e2-4608-9f5e-45b0de7b23a4\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.707437 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-catalog-content\") pod \"65863475-b2e2-4608-9f5e-45b0de7b23a4\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.707481 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-utilities\") pod \"65863475-b2e2-4608-9f5e-45b0de7b23a4\" (UID: \"65863475-b2e2-4608-9f5e-45b0de7b23a4\") " Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.709022 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-utilities" (OuterVolumeSpecName: "utilities") pod "65863475-b2e2-4608-9f5e-45b0de7b23a4" (UID: "65863475-b2e2-4608-9f5e-45b0de7b23a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.723567 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65863475-b2e2-4608-9f5e-45b0de7b23a4-kube-api-access-b429p" (OuterVolumeSpecName: "kube-api-access-b429p") pod "65863475-b2e2-4608-9f5e-45b0de7b23a4" (UID: "65863475-b2e2-4608-9f5e-45b0de7b23a4"). InnerVolumeSpecName "kube-api-access-b429p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.756264 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65863475-b2e2-4608-9f5e-45b0de7b23a4" (UID: "65863475-b2e2-4608-9f5e-45b0de7b23a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.758686 4687 generic.go:334] "Generic (PLEG): container finished" podID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerID="d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f" exitCode=0 Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.758726 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plpjx" event={"ID":"65863475-b2e2-4608-9f5e-45b0de7b23a4","Type":"ContainerDied","Data":"d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f"} Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.758753 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-plpjx" event={"ID":"65863475-b2e2-4608-9f5e-45b0de7b23a4","Type":"ContainerDied","Data":"5d8cff792c07d997e9d54ce763388f22bba3ff5cd58366f2c8c66e195ecce65c"} Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.758772 4687 scope.go:117] "RemoveContainer" containerID="d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.758886 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-plpjx" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.788470 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-plpjx"] Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.793870 4687 scope.go:117] "RemoveContainer" containerID="a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.798733 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-plpjx"] Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.809889 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b429p\" (UniqueName: \"kubernetes.io/projected/65863475-b2e2-4608-9f5e-45b0de7b23a4-kube-api-access-b429p\") on node \"crc\" DevicePath \"\"" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.809921 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.809931 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65863475-b2e2-4608-9f5e-45b0de7b23a4-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.834526 4687 scope.go:117] "RemoveContainer" containerID="bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.871680 4687 scope.go:117] "RemoveContainer" containerID="d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f" Dec 03 18:35:34 crc kubenswrapper[4687]: E1203 18:35:34.872164 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f\": container with ID starting with d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f not found: ID does not exist" containerID="d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.872192 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f"} err="failed to get container status \"d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f\": rpc error: code = NotFound desc = could not find container \"d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f\": container with ID starting with d31168b65d641d61946b9b5b5a6dbce8ed5e6d7cc82b65c5decae6d56e51103f not found: ID does not exist" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.872303 4687 scope.go:117] "RemoveContainer" containerID="a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c" Dec 03 18:35:34 crc kubenswrapper[4687]: E1203 18:35:34.880405 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c\": container with ID starting with a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c not found: ID does not exist" containerID="a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.880457 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c"} err="failed to get container status \"a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c\": rpc error: code = NotFound desc = could not find container \"a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c\": container with ID 
starting with a08db73feb377c970473594fdc1bfd434a05584d56110704fa273a33ab09384c not found: ID does not exist" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.880525 4687 scope.go:117] "RemoveContainer" containerID="bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b" Dec 03 18:35:34 crc kubenswrapper[4687]: E1203 18:35:34.881335 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b\": container with ID starting with bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b not found: ID does not exist" containerID="bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b" Dec 03 18:35:34 crc kubenswrapper[4687]: I1203 18:35:34.881360 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b"} err="failed to get container status \"bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b\": rpc error: code = NotFound desc = could not find container \"bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b\": container with ID starting with bf92ff9dd777d33049c50e15e8f6523ed3f1ebd86abd0e94d9e43c20bbab237b not found: ID does not exist" Dec 03 18:35:35 crc kubenswrapper[4687]: I1203 18:35:35.419100 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65863475-b2e2-4608-9f5e-45b0de7b23a4" path="/var/lib/kubelet/pods/65863475-b2e2-4608-9f5e-45b0de7b23a4/volumes" Dec 03 18:35:44 crc kubenswrapper[4687]: I1203 18:35:44.111539 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:35:44 crc kubenswrapper[4687]: I1203 
18:35:44.112689 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:35:44 crc kubenswrapper[4687]: I1203 18:35:44.112831 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:35:44 crc kubenswrapper[4687]: I1203 18:35:44.114501 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"031c085439df18615aa88360df4c28dc0098da8335ec8be859bc4cf171d75d7a"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:35:44 crc kubenswrapper[4687]: I1203 18:35:44.114618 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://031c085439df18615aa88360df4c28dc0098da8335ec8be859bc4cf171d75d7a" gracePeriod=600 Dec 03 18:35:44 crc kubenswrapper[4687]: I1203 18:35:44.841028 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="031c085439df18615aa88360df4c28dc0098da8335ec8be859bc4cf171d75d7a" exitCode=0 Dec 03 18:35:44 crc kubenswrapper[4687]: I1203 18:35:44.841080 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"031c085439df18615aa88360df4c28dc0098da8335ec8be859bc4cf171d75d7a"} Dec 03 18:35:44 crc 
kubenswrapper[4687]: I1203 18:35:44.841643 4687 scope.go:117] "RemoveContainer" containerID="a042760174b4df2d99e76709c6142522b832fb80672fd5e699d8c7de87d68d91" Dec 03 18:35:46 crc kubenswrapper[4687]: I1203 18:35:46.862409 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8"} Dec 03 18:36:23 crc kubenswrapper[4687]: I1203 18:36:23.591520 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="05480209-7592-4ddf-a2d9-f06d4dce2c75" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.169:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.187514 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vl5q5"] Dec 03 18:37:10 crc kubenswrapper[4687]: E1203 18:37:10.188532 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerName="extract-content" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.188551 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerName="extract-content" Dec 03 18:37:10 crc kubenswrapper[4687]: E1203 18:37:10.188596 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerName="extract-utilities" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.188605 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerName="extract-utilities" Dec 03 18:37:10 crc kubenswrapper[4687]: E1203 18:37:10.188628 4687 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerName="registry-server" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.188637 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerName="registry-server" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.188865 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="65863475-b2e2-4608-9f5e-45b0de7b23a4" containerName="registry-server" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.190576 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.201551 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vl5q5"] Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.281945 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4977\" (UniqueName: \"kubernetes.io/projected/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-kube-api-access-d4977\") pod \"certified-operators-vl5q5\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.282090 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-catalog-content\") pod \"certified-operators-vl5q5\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.282165 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-utilities\") pod \"certified-operators-vl5q5\" (UID: 
\"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.385064 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-catalog-content\") pod \"certified-operators-vl5q5\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.385142 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-utilities\") pod \"certified-operators-vl5q5\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.385306 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4977\" (UniqueName: \"kubernetes.io/projected/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-kube-api-access-d4977\") pod \"certified-operators-vl5q5\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.385736 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-utilities\") pod \"certified-operators-vl5q5\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.386160 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-catalog-content\") pod \"certified-operators-vl5q5\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") 
" pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.421905 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4977\" (UniqueName: \"kubernetes.io/projected/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-kube-api-access-d4977\") pod \"certified-operators-vl5q5\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:10 crc kubenswrapper[4687]: I1203 18:37:10.514377 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:11 crc kubenswrapper[4687]: I1203 18:37:11.031732 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vl5q5"] Dec 03 18:37:11 crc kubenswrapper[4687]: I1203 18:37:11.722946 4687 generic.go:334] "Generic (PLEG): container finished" podID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerID="a2f0f36601036eab0df9f4c39b8459e3217cc136b65f16e546ae8d7d723adfa4" exitCode=0 Dec 03 18:37:11 crc kubenswrapper[4687]: I1203 18:37:11.723043 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl5q5" event={"ID":"cd51ccb8-1316-4f7c-ae6b-5a329d47e756","Type":"ContainerDied","Data":"a2f0f36601036eab0df9f4c39b8459e3217cc136b65f16e546ae8d7d723adfa4"} Dec 03 18:37:11 crc kubenswrapper[4687]: I1203 18:37:11.723287 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl5q5" event={"ID":"cd51ccb8-1316-4f7c-ae6b-5a329d47e756","Type":"ContainerStarted","Data":"644b764bb21a3d145c2867176a071bb1fa95539dc00ee03fda496bc60ab1c90c"} Dec 03 18:37:12 crc kubenswrapper[4687]: I1203 18:37:12.733675 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl5q5" 
event={"ID":"cd51ccb8-1316-4f7c-ae6b-5a329d47e756","Type":"ContainerStarted","Data":"6fa45274f8e1df5ffb75d875b5a2725302cede29ae7ff62f359adeade4e0f27c"} Dec 03 18:37:13 crc kubenswrapper[4687]: I1203 18:37:13.745367 4687 generic.go:334] "Generic (PLEG): container finished" podID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerID="6fa45274f8e1df5ffb75d875b5a2725302cede29ae7ff62f359adeade4e0f27c" exitCode=0 Dec 03 18:37:13 crc kubenswrapper[4687]: I1203 18:37:13.745426 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl5q5" event={"ID":"cd51ccb8-1316-4f7c-ae6b-5a329d47e756","Type":"ContainerDied","Data":"6fa45274f8e1df5ffb75d875b5a2725302cede29ae7ff62f359adeade4e0f27c"} Dec 03 18:37:14 crc kubenswrapper[4687]: I1203 18:37:14.758389 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl5q5" event={"ID":"cd51ccb8-1316-4f7c-ae6b-5a329d47e756","Type":"ContainerStarted","Data":"e3d11d28fdb6cefa4906b8fca51bcd84053c3e2dd969b0493c00c17c2f5ac01e"} Dec 03 18:37:14 crc kubenswrapper[4687]: I1203 18:37:14.781021 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vl5q5" podStartSLOduration=2.295647134 podStartE2EDuration="4.78100283s" podCreationTimestamp="2025-12-03 18:37:10 +0000 UTC" firstStartedPulling="2025-12-03 18:37:11.72532597 +0000 UTC m=+3464.616021413" lastFinishedPulling="2025-12-03 18:37:14.210681656 +0000 UTC m=+3467.101377109" observedRunningTime="2025-12-03 18:37:14.778321707 +0000 UTC m=+3467.669017150" watchObservedRunningTime="2025-12-03 18:37:14.78100283 +0000 UTC m=+3467.671698263" Dec 03 18:37:20 crc kubenswrapper[4687]: I1203 18:37:20.514773 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:20 crc kubenswrapper[4687]: I1203 18:37:20.515164 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:20 crc kubenswrapper[4687]: I1203 18:37:20.560740 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:20 crc kubenswrapper[4687]: I1203 18:37:20.881988 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:20 crc kubenswrapper[4687]: I1203 18:37:20.935844 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vl5q5"] Dec 03 18:37:22 crc kubenswrapper[4687]: I1203 18:37:22.839824 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vl5q5" podUID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerName="registry-server" containerID="cri-o://e3d11d28fdb6cefa4906b8fca51bcd84053c3e2dd969b0493c00c17c2f5ac01e" gracePeriod=2 Dec 03 18:37:23 crc kubenswrapper[4687]: I1203 18:37:23.852737 4687 generic.go:334] "Generic (PLEG): container finished" podID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerID="e3d11d28fdb6cefa4906b8fca51bcd84053c3e2dd969b0493c00c17c2f5ac01e" exitCode=0 Dec 03 18:37:23 crc kubenswrapper[4687]: I1203 18:37:23.852871 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl5q5" event={"ID":"cd51ccb8-1316-4f7c-ae6b-5a329d47e756","Type":"ContainerDied","Data":"e3d11d28fdb6cefa4906b8fca51bcd84053c3e2dd969b0493c00c17c2f5ac01e"} Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.491262 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.596412 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4977\" (UniqueName: \"kubernetes.io/projected/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-kube-api-access-d4977\") pod \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.596542 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-utilities\") pod \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.596604 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-catalog-content\") pod \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\" (UID: \"cd51ccb8-1316-4f7c-ae6b-5a329d47e756\") " Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.598200 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-utilities" (OuterVolumeSpecName: "utilities") pod "cd51ccb8-1316-4f7c-ae6b-5a329d47e756" (UID: "cd51ccb8-1316-4f7c-ae6b-5a329d47e756"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.603880 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-kube-api-access-d4977" (OuterVolumeSpecName: "kube-api-access-d4977") pod "cd51ccb8-1316-4f7c-ae6b-5a329d47e756" (UID: "cd51ccb8-1316-4f7c-ae6b-5a329d47e756"). InnerVolumeSpecName "kube-api-access-d4977". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.645949 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd51ccb8-1316-4f7c-ae6b-5a329d47e756" (UID: "cd51ccb8-1316-4f7c-ae6b-5a329d47e756"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.698349 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.698600 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4977\" (UniqueName: \"kubernetes.io/projected/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-kube-api-access-d4977\") on node \"crc\" DevicePath \"\"" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.698673 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd51ccb8-1316-4f7c-ae6b-5a329d47e756-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.863634 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vl5q5" event={"ID":"cd51ccb8-1316-4f7c-ae6b-5a329d47e756","Type":"ContainerDied","Data":"644b764bb21a3d145c2867176a071bb1fa95539dc00ee03fda496bc60ab1c90c"} Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.863670 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vl5q5" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.863713 4687 scope.go:117] "RemoveContainer" containerID="e3d11d28fdb6cefa4906b8fca51bcd84053c3e2dd969b0493c00c17c2f5ac01e" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.898600 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vl5q5"] Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.903890 4687 scope.go:117] "RemoveContainer" containerID="6fa45274f8e1df5ffb75d875b5a2725302cede29ae7ff62f359adeade4e0f27c" Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.909791 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vl5q5"] Dec 03 18:37:24 crc kubenswrapper[4687]: I1203 18:37:24.935290 4687 scope.go:117] "RemoveContainer" containerID="a2f0f36601036eab0df9f4c39b8459e3217cc136b65f16e546ae8d7d723adfa4" Dec 03 18:37:25 crc kubenswrapper[4687]: I1203 18:37:25.419488 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" path="/var/lib/kubelet/pods/cd51ccb8-1316-4f7c-ae6b-5a329d47e756/volumes" Dec 03 18:38:14 crc kubenswrapper[4687]: I1203 18:38:14.111960 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:38:14 crc kubenswrapper[4687]: I1203 18:38:14.112711 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:38:44 crc kubenswrapper[4687]: 
I1203 18:38:44.111255 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:38:44 crc kubenswrapper[4687]: I1203 18:38:44.111796 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.608714 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-76mfq"] Dec 03 18:38:59 crc kubenswrapper[4687]: E1203 18:38:59.609633 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerName="registry-server" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.609648 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerName="registry-server" Dec 03 18:38:59 crc kubenswrapper[4687]: E1203 18:38:59.609662 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerName="extract-utilities" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.609668 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerName="extract-utilities" Dec 03 18:38:59 crc kubenswrapper[4687]: E1203 18:38:59.609694 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerName="extract-content" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.609700 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerName="extract-content" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.609863 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd51ccb8-1316-4f7c-ae6b-5a329d47e756" containerName="registry-server" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.611245 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.623327 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76mfq"] Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.683230 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdrv\" (UniqueName: \"kubernetes.io/projected/233d95cc-bded-464d-adf9-b306f35753c5-kube-api-access-tfdrv\") pod \"redhat-marketplace-76mfq\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.683330 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-catalog-content\") pod \"redhat-marketplace-76mfq\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.683503 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-utilities\") pod \"redhat-marketplace-76mfq\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.785750 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-catalog-content\") pod \"redhat-marketplace-76mfq\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.785871 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-utilities\") pod \"redhat-marketplace-76mfq\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.785939 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdrv\" (UniqueName: \"kubernetes.io/projected/233d95cc-bded-464d-adf9-b306f35753c5-kube-api-access-tfdrv\") pod \"redhat-marketplace-76mfq\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.786341 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-catalog-content\") pod \"redhat-marketplace-76mfq\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.786413 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-utilities\") pod \"redhat-marketplace-76mfq\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.807332 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tfdrv\" (UniqueName: \"kubernetes.io/projected/233d95cc-bded-464d-adf9-b306f35753c5-kube-api-access-tfdrv\") pod \"redhat-marketplace-76mfq\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:38:59 crc kubenswrapper[4687]: I1203 18:38:59.931219 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:39:00 crc kubenswrapper[4687]: I1203 18:39:00.462642 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76mfq"] Dec 03 18:39:00 crc kubenswrapper[4687]: I1203 18:39:00.759057 4687 generic.go:334] "Generic (PLEG): container finished" podID="233d95cc-bded-464d-adf9-b306f35753c5" containerID="d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1" exitCode=0 Dec 03 18:39:00 crc kubenswrapper[4687]: I1203 18:39:00.759108 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76mfq" event={"ID":"233d95cc-bded-464d-adf9-b306f35753c5","Type":"ContainerDied","Data":"d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1"} Dec 03 18:39:00 crc kubenswrapper[4687]: I1203 18:39:00.759214 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76mfq" event={"ID":"233d95cc-bded-464d-adf9-b306f35753c5","Type":"ContainerStarted","Data":"bf09f03e52e784124b4e0d6ce01c3df81b1f5b1d2019d6c2886187774dc53d6e"} Dec 03 18:39:00 crc kubenswrapper[4687]: I1203 18:39:00.781185 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:39:01 crc kubenswrapper[4687]: I1203 18:39:01.776983 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76mfq" 
event={"ID":"233d95cc-bded-464d-adf9-b306f35753c5","Type":"ContainerStarted","Data":"817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f"} Dec 03 18:39:02 crc kubenswrapper[4687]: I1203 18:39:02.788275 4687 generic.go:334] "Generic (PLEG): container finished" podID="233d95cc-bded-464d-adf9-b306f35753c5" containerID="817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f" exitCode=0 Dec 03 18:39:02 crc kubenswrapper[4687]: I1203 18:39:02.788381 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76mfq" event={"ID":"233d95cc-bded-464d-adf9-b306f35753c5","Type":"ContainerDied","Data":"817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f"} Dec 03 18:39:03 crc kubenswrapper[4687]: I1203 18:39:03.797687 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76mfq" event={"ID":"233d95cc-bded-464d-adf9-b306f35753c5","Type":"ContainerStarted","Data":"02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60"} Dec 03 18:39:03 crc kubenswrapper[4687]: I1203 18:39:03.818161 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-76mfq" podStartSLOduration=2.263621262 podStartE2EDuration="4.818137014s" podCreationTimestamp="2025-12-03 18:38:59 +0000 UTC" firstStartedPulling="2025-12-03 18:39:00.778503516 +0000 UTC m=+3573.669198949" lastFinishedPulling="2025-12-03 18:39:03.333019278 +0000 UTC m=+3576.223714701" observedRunningTime="2025-12-03 18:39:03.814985079 +0000 UTC m=+3576.705680522" watchObservedRunningTime="2025-12-03 18:39:03.818137014 +0000 UTC m=+3576.708832477" Dec 03 18:39:09 crc kubenswrapper[4687]: I1203 18:39:09.931809 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:39:09 crc kubenswrapper[4687]: I1203 18:39:09.932414 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:39:10 crc kubenswrapper[4687]: I1203 18:39:10.009024 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:39:10 crc kubenswrapper[4687]: I1203 18:39:10.912352 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:39:10 crc kubenswrapper[4687]: I1203 18:39:10.957204 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76mfq"] Dec 03 18:39:12 crc kubenswrapper[4687]: I1203 18:39:12.888786 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-76mfq" podUID="233d95cc-bded-464d-adf9-b306f35753c5" containerName="registry-server" containerID="cri-o://02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60" gracePeriod=2 Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.429692 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.573565 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfdrv\" (UniqueName: \"kubernetes.io/projected/233d95cc-bded-464d-adf9-b306f35753c5-kube-api-access-tfdrv\") pod \"233d95cc-bded-464d-adf9-b306f35753c5\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.573906 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-utilities\") pod \"233d95cc-bded-464d-adf9-b306f35753c5\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.574024 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-catalog-content\") pod \"233d95cc-bded-464d-adf9-b306f35753c5\" (UID: \"233d95cc-bded-464d-adf9-b306f35753c5\") " Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.575186 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-utilities" (OuterVolumeSpecName: "utilities") pod "233d95cc-bded-464d-adf9-b306f35753c5" (UID: "233d95cc-bded-464d-adf9-b306f35753c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.579880 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/233d95cc-bded-464d-adf9-b306f35753c5-kube-api-access-tfdrv" (OuterVolumeSpecName: "kube-api-access-tfdrv") pod "233d95cc-bded-464d-adf9-b306f35753c5" (UID: "233d95cc-bded-464d-adf9-b306f35753c5"). InnerVolumeSpecName "kube-api-access-tfdrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.614545 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "233d95cc-bded-464d-adf9-b306f35753c5" (UID: "233d95cc-bded-464d-adf9-b306f35753c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.676572 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.676610 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfdrv\" (UniqueName: \"kubernetes.io/projected/233d95cc-bded-464d-adf9-b306f35753c5-kube-api-access-tfdrv\") on node \"crc\" DevicePath \"\"" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.676624 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/233d95cc-bded-464d-adf9-b306f35753c5-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.904545 4687 generic.go:334] "Generic (PLEG): container finished" podID="233d95cc-bded-464d-adf9-b306f35753c5" containerID="02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60" exitCode=0 Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.904602 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76mfq" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.904623 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76mfq" event={"ID":"233d95cc-bded-464d-adf9-b306f35753c5","Type":"ContainerDied","Data":"02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60"} Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.904679 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76mfq" event={"ID":"233d95cc-bded-464d-adf9-b306f35753c5","Type":"ContainerDied","Data":"bf09f03e52e784124b4e0d6ce01c3df81b1f5b1d2019d6c2886187774dc53d6e"} Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.904718 4687 scope.go:117] "RemoveContainer" containerID="02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.942033 4687 scope.go:117] "RemoveContainer" containerID="817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f" Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.946575 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76mfq"] Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.954283 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-76mfq"] Dec 03 18:39:13 crc kubenswrapper[4687]: I1203 18:39:13.969019 4687 scope.go:117] "RemoveContainer" containerID="d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.011982 4687 scope.go:117] "RemoveContainer" containerID="02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60" Dec 03 18:39:14 crc kubenswrapper[4687]: E1203 18:39:14.012698 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60\": container with ID starting with 02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60 not found: ID does not exist" containerID="02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.012753 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60"} err="failed to get container status \"02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60\": rpc error: code = NotFound desc = could not find container \"02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60\": container with ID starting with 02ee7da3beb2297bf5f9fa9155f6ed0eda2e05d9a8ab6d1e6782348b0b68af60 not found: ID does not exist" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.012784 4687 scope.go:117] "RemoveContainer" containerID="817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f" Dec 03 18:39:14 crc kubenswrapper[4687]: E1203 18:39:14.013448 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f\": container with ID starting with 817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f not found: ID does not exist" containerID="817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.013512 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f"} err="failed to get container status \"817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f\": rpc error: code = NotFound desc = could not find container \"817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f\": container with ID 
starting with 817dd8e8bceac9c0b5d8d8fb262c0228e9d3723ead3eafe81c2a42566b75f95f not found: ID does not exist" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.013543 4687 scope.go:117] "RemoveContainer" containerID="d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1" Dec 03 18:39:14 crc kubenswrapper[4687]: E1203 18:39:14.014062 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1\": container with ID starting with d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1 not found: ID does not exist" containerID="d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.014093 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1"} err="failed to get container status \"d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1\": rpc error: code = NotFound desc = could not find container \"d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1\": container with ID starting with d7baf90942729abaffb923989e449bcbdfb27da82c75ad8d1661ee8d05f95ef1 not found: ID does not exist" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.111349 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.111546 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.111602 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.112541 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:39:14 crc kubenswrapper[4687]: I1203 18:39:14.112614 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" gracePeriod=600 Dec 03 18:39:15 crc kubenswrapper[4687]: I1203 18:39:15.425663 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="233d95cc-bded-464d-adf9-b306f35753c5" path="/var/lib/kubelet/pods/233d95cc-bded-464d-adf9-b306f35753c5/volumes" Dec 03 18:39:15 crc kubenswrapper[4687]: E1203 18:39:15.622639 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:39:15 crc kubenswrapper[4687]: I1203 18:39:15.930579 4687 generic.go:334] "Generic (PLEG): container finished" 
podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" exitCode=0 Dec 03 18:39:15 crc kubenswrapper[4687]: I1203 18:39:15.930637 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8"} Dec 03 18:39:15 crc kubenswrapper[4687]: I1203 18:39:15.930697 4687 scope.go:117] "RemoveContainer" containerID="031c085439df18615aa88360df4c28dc0098da8335ec8be859bc4cf171d75d7a" Dec 03 18:39:15 crc kubenswrapper[4687]: I1203 18:39:15.931409 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:39:15 crc kubenswrapper[4687]: E1203 18:39:15.931733 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:39:28 crc kubenswrapper[4687]: I1203 18:39:28.407385 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:39:28 crc kubenswrapper[4687]: E1203 18:39:28.408501 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 
18:39:39 crc kubenswrapper[4687]: I1203 18:39:39.407645 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:39:39 crc kubenswrapper[4687]: E1203 18:39:39.408432 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:39:52 crc kubenswrapper[4687]: I1203 18:39:52.407850 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:39:52 crc kubenswrapper[4687]: E1203 18:39:52.408589 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:40:03 crc kubenswrapper[4687]: I1203 18:40:03.407666 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:40:03 crc kubenswrapper[4687]: E1203 18:40:03.408392 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" 
podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:40:18 crc kubenswrapper[4687]: I1203 18:40:18.407957 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:40:18 crc kubenswrapper[4687]: E1203 18:40:18.409459 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:40:29 crc kubenswrapper[4687]: I1203 18:40:29.407108 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:40:29 crc kubenswrapper[4687]: E1203 18:40:29.408090 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:40:40 crc kubenswrapper[4687]: I1203 18:40:40.408194 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:40:40 crc kubenswrapper[4687]: E1203 18:40:40.410099 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:40:54 crc kubenswrapper[4687]: I1203 18:40:54.407817 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:40:54 crc kubenswrapper[4687]: E1203 18:40:54.408501 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:41:06 crc kubenswrapper[4687]: I1203 18:41:06.407371 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:41:06 crc kubenswrapper[4687]: E1203 18:41:06.409452 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:41:18 crc kubenswrapper[4687]: I1203 18:41:18.407401 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:41:18 crc kubenswrapper[4687]: E1203 18:41:18.408033 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:41:29 crc kubenswrapper[4687]: I1203 18:41:29.407229 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:41:29 crc kubenswrapper[4687]: E1203 18:41:29.407977 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:41:42 crc kubenswrapper[4687]: I1203 18:41:42.407749 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:41:42 crc kubenswrapper[4687]: E1203 18:41:42.408940 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:41:48 crc kubenswrapper[4687]: I1203 18:41:48.410840 4687 generic.go:334] "Generic (PLEG): container finished" podID="3c56ab4c-455a-4436-927e-3dba7e4aa0ba" containerID="c377f7ebca8b9d7cfd054a8f51990a2e176acdb10756da0c6c1cf1e448ffa83f" exitCode=0 Dec 03 18:41:48 crc kubenswrapper[4687]: I1203 18:41:48.410918 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"3c56ab4c-455a-4436-927e-3dba7e4aa0ba","Type":"ContainerDied","Data":"c377f7ebca8b9d7cfd054a8f51990a2e176acdb10756da0c6c1cf1e448ffa83f"} Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.815470 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.911883 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-temporary\") pod \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.911956 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ca-certs\") pod \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.912047 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqfm5\" (UniqueName: \"kubernetes.io/projected/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-kube-api-access-xqfm5\") pod \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.912145 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config\") pod \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.912193 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.912303 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-workdir\") pod \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.912364 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config-secret\") pod \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.912425 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-config-data\") pod \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.912498 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ssh-key\") pod \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\" (UID: \"3c56ab4c-455a-4436-927e-3dba7e4aa0ba\") " Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.913337 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "3c56ab4c-455a-4436-927e-3dba7e4aa0ba" (UID: "3c56ab4c-455a-4436-927e-3dba7e4aa0ba"). 
InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.913840 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-config-data" (OuterVolumeSpecName: "config-data") pod "3c56ab4c-455a-4436-927e-3dba7e4aa0ba" (UID: "3c56ab4c-455a-4436-927e-3dba7e4aa0ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.917888 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "3c56ab4c-455a-4436-927e-3dba7e4aa0ba" (UID: "3c56ab4c-455a-4436-927e-3dba7e4aa0ba"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.922517 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-kube-api-access-xqfm5" (OuterVolumeSpecName: "kube-api-access-xqfm5") pod "3c56ab4c-455a-4436-927e-3dba7e4aa0ba" (UID: "3c56ab4c-455a-4436-927e-3dba7e4aa0ba"). InnerVolumeSpecName "kube-api-access-xqfm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.922823 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "3c56ab4c-455a-4436-927e-3dba7e4aa0ba" (UID: "3c56ab4c-455a-4436-927e-3dba7e4aa0ba"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.943267 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3c56ab4c-455a-4436-927e-3dba7e4aa0ba" (UID: "3c56ab4c-455a-4436-927e-3dba7e4aa0ba"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.946016 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3c56ab4c-455a-4436-927e-3dba7e4aa0ba" (UID: "3c56ab4c-455a-4436-927e-3dba7e4aa0ba"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.946584 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "3c56ab4c-455a-4436-927e-3dba7e4aa0ba" (UID: "3c56ab4c-455a-4436-927e-3dba7e4aa0ba"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:41:49 crc kubenswrapper[4687]: I1203 18:41:49.969744 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3c56ab4c-455a-4436-927e-3dba7e4aa0ba" (UID: "3c56ab4c-455a-4436-927e-3dba7e4aa0ba"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.014511 4687 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-config-data\") on node \"crc\" DevicePath \"\"" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.014649 4687 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.014715 4687 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.014770 4687 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.014823 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqfm5\" (UniqueName: \"kubernetes.io/projected/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-kube-api-access-xqfm5\") on node \"crc\" DevicePath \"\"" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.014876 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.014954 4687 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.015085 
4687 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.015166 4687 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3c56ab4c-455a-4436-927e-3dba7e4aa0ba-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.034375 4687 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.116859 4687 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.435817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3c56ab4c-455a-4436-927e-3dba7e4aa0ba","Type":"ContainerDied","Data":"24f3459adddab09c715eca70496d477c499a7c7821b4061f08cb9639f4b3c2da"} Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.435890 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24f3459adddab09c715eca70496d477c499a7c7821b4061f08cb9639f4b3c2da" Dec 03 18:41:50 crc kubenswrapper[4687]: I1203 18:41:50.435904 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 03 18:41:53 crc kubenswrapper[4687]: I1203 18:41:53.408738 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:41:53 crc kubenswrapper[4687]: E1203 18:41:53.409793 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.227627 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 18:42:02 crc kubenswrapper[4687]: E1203 18:42:02.228633 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233d95cc-bded-464d-adf9-b306f35753c5" containerName="extract-content" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.228653 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="233d95cc-bded-464d-adf9-b306f35753c5" containerName="extract-content" Dec 03 18:42:02 crc kubenswrapper[4687]: E1203 18:42:02.228690 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233d95cc-bded-464d-adf9-b306f35753c5" containerName="registry-server" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.228701 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="233d95cc-bded-464d-adf9-b306f35753c5" containerName="registry-server" Dec 03 18:42:02 crc kubenswrapper[4687]: E1203 18:42:02.228725 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c56ab4c-455a-4436-927e-3dba7e4aa0ba" containerName="tempest-tests-tempest-tests-runner" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 
18:42:02.228734 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c56ab4c-455a-4436-927e-3dba7e4aa0ba" containerName="tempest-tests-tempest-tests-runner" Dec 03 18:42:02 crc kubenswrapper[4687]: E1203 18:42:02.228770 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233d95cc-bded-464d-adf9-b306f35753c5" containerName="extract-utilities" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.228779 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="233d95cc-bded-464d-adf9-b306f35753c5" containerName="extract-utilities" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.229008 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="233d95cc-bded-464d-adf9-b306f35753c5" containerName="registry-server" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.229029 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c56ab4c-455a-4436-927e-3dba7e4aa0ba" containerName="tempest-tests-tempest-tests-runner" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.230104 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.232594 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-ckmlh" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.248569 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.394441 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxvjh\" (UniqueName: \"kubernetes.io/projected/e1272d14-143a-4ce6-9b77-7fa6e7cd99f0-kube-api-access-mxvjh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1272d14-143a-4ce6-9b77-7fa6e7cd99f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.394791 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1272d14-143a-4ce6-9b77-7fa6e7cd99f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.496104 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1272d14-143a-4ce6-9b77-7fa6e7cd99f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.496282 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxvjh\" (UniqueName: 
\"kubernetes.io/projected/e1272d14-143a-4ce6-9b77-7fa6e7cd99f0-kube-api-access-mxvjh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1272d14-143a-4ce6-9b77-7fa6e7cd99f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.496654 4687 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1272d14-143a-4ce6-9b77-7fa6e7cd99f0\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.519565 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxvjh\" (UniqueName: \"kubernetes.io/projected/e1272d14-143a-4ce6-9b77-7fa6e7cd99f0-kube-api-access-mxvjh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1272d14-143a-4ce6-9b77-7fa6e7cd99f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.531543 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e1272d14-143a-4ce6-9b77-7fa6e7cd99f0\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 18:42:02 crc kubenswrapper[4687]: I1203 18:42:02.602341 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 03 18:42:03 crc kubenswrapper[4687]: I1203 18:42:03.068077 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 03 18:42:03 crc kubenswrapper[4687]: I1203 18:42:03.585167 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e1272d14-143a-4ce6-9b77-7fa6e7cd99f0","Type":"ContainerStarted","Data":"862a4946fe7b8e8da969102b7210e6a5c6077e501cb7c65c3f83a0b34ca7788e"} Dec 03 18:42:04 crc kubenswrapper[4687]: I1203 18:42:04.597494 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e1272d14-143a-4ce6-9b77-7fa6e7cd99f0","Type":"ContainerStarted","Data":"4bab82ea0127e243c244179a966a7e67ab6e1743464510a51f4b6ce2963b1911"} Dec 03 18:42:08 crc kubenswrapper[4687]: I1203 18:42:08.407266 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:42:08 crc kubenswrapper[4687]: E1203 18:42:08.408217 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:42:19 crc kubenswrapper[4687]: I1203 18:42:19.406855 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:42:19 crc kubenswrapper[4687]: E1203 18:42:19.409005 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.409107 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=24.595481888 podStartE2EDuration="25.409090099s" podCreationTimestamp="2025-12-03 18:42:02 +0000 UTC" firstStartedPulling="2025-12-03 18:42:03.080322104 +0000 UTC m=+3755.971017577" lastFinishedPulling="2025-12-03 18:42:03.893930355 +0000 UTC m=+3756.784625788" observedRunningTime="2025-12-03 18:42:04.613780875 +0000 UTC m=+3757.504476318" watchObservedRunningTime="2025-12-03 18:42:27.409090099 +0000 UTC m=+3780.299785532" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.429903 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4zgs8/must-gather-9dcnt"] Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.431823 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4zgs8/must-gather-9dcnt"] Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.431911 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.434550 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4zgs8"/"openshift-service-ca.crt" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.434566 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4zgs8"/"default-dockercfg-58sm6" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.434814 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4zgs8"/"kube-root-ca.crt" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.513206 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-must-gather-output\") pod \"must-gather-9dcnt\" (UID: \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\") " pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.513325 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n4fv\" (UniqueName: \"kubernetes.io/projected/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-kube-api-access-9n4fv\") pod \"must-gather-9dcnt\" (UID: \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\") " pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.615986 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n4fv\" (UniqueName: \"kubernetes.io/projected/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-kube-api-access-9n4fv\") pod \"must-gather-9dcnt\" (UID: \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\") " pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.616279 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-must-gather-output\") pod \"must-gather-9dcnt\" (UID: \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\") " pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.616869 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-must-gather-output\") pod \"must-gather-9dcnt\" (UID: \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\") " pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.641876 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n4fv\" (UniqueName: \"kubernetes.io/projected/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-kube-api-access-9n4fv\") pod \"must-gather-9dcnt\" (UID: \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\") " pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:42:27 crc kubenswrapper[4687]: I1203 18:42:27.750660 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:42:28 crc kubenswrapper[4687]: I1203 18:42:28.233348 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4zgs8/must-gather-9dcnt"] Dec 03 18:42:28 crc kubenswrapper[4687]: I1203 18:42:28.868393 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" event={"ID":"2e73d5dc-2b2f-46c3-a78b-3387644a03c0","Type":"ContainerStarted","Data":"a896175b6d877199e7faa99550677e00c48e631d4e500f4cec6ba61d79aa3ddb"} Dec 03 18:42:32 crc kubenswrapper[4687]: I1203 18:42:32.905615 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" event={"ID":"2e73d5dc-2b2f-46c3-a78b-3387644a03c0","Type":"ContainerStarted","Data":"43afa43b247d32c9317057d3cdc4ec999a69f4488d81f3431b5c84218e215822"} Dec 03 18:42:32 crc kubenswrapper[4687]: I1203 18:42:32.906435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" event={"ID":"2e73d5dc-2b2f-46c3-a78b-3387644a03c0","Type":"ContainerStarted","Data":"6d974b287b097543f59893f06145c8930f4d0eaa421a2f1f8dc4b8ec037135a6"} Dec 03 18:42:32 crc kubenswrapper[4687]: I1203 18:42:32.937601 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" podStartSLOduration=2.368045095 podStartE2EDuration="5.937580766s" podCreationTimestamp="2025-12-03 18:42:27 +0000 UTC" firstStartedPulling="2025-12-03 18:42:28.241450786 +0000 UTC m=+3781.132146219" lastFinishedPulling="2025-12-03 18:42:31.810986457 +0000 UTC m=+3784.701681890" observedRunningTime="2025-12-03 18:42:32.921597484 +0000 UTC m=+3785.812292957" watchObservedRunningTime="2025-12-03 18:42:32.937580766 +0000 UTC m=+3785.828276210" Dec 03 18:42:33 crc kubenswrapper[4687]: I1203 18:42:33.407546 4687 scope.go:117] "RemoveContainer" 
containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:42:33 crc kubenswrapper[4687]: E1203 18:42:33.407914 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.372427 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4zgs8/crc-debug-fpvpx"] Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.374035 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.518872 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d863637d-0f22-43a2-a898-4440de4ce63c-host\") pod \"crc-debug-fpvpx\" (UID: \"d863637d-0f22-43a2-a898-4440de4ce63c\") " pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.519319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8twt7\" (UniqueName: \"kubernetes.io/projected/d863637d-0f22-43a2-a898-4440de4ce63c-kube-api-access-8twt7\") pod \"crc-debug-fpvpx\" (UID: \"d863637d-0f22-43a2-a898-4440de4ce63c\") " pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.621442 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8twt7\" (UniqueName: \"kubernetes.io/projected/d863637d-0f22-43a2-a898-4440de4ce63c-kube-api-access-8twt7\") pod 
\"crc-debug-fpvpx\" (UID: \"d863637d-0f22-43a2-a898-4440de4ce63c\") " pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.621583 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d863637d-0f22-43a2-a898-4440de4ce63c-host\") pod \"crc-debug-fpvpx\" (UID: \"d863637d-0f22-43a2-a898-4440de4ce63c\") " pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.621796 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d863637d-0f22-43a2-a898-4440de4ce63c-host\") pod \"crc-debug-fpvpx\" (UID: \"d863637d-0f22-43a2-a898-4440de4ce63c\") " pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.639797 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8twt7\" (UniqueName: \"kubernetes.io/projected/d863637d-0f22-43a2-a898-4440de4ce63c-kube-api-access-8twt7\") pod \"crc-debug-fpvpx\" (UID: \"d863637d-0f22-43a2-a898-4440de4ce63c\") " pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.689666 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:42:36 crc kubenswrapper[4687]: I1203 18:42:36.941274 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" event={"ID":"d863637d-0f22-43a2-a898-4440de4ce63c","Type":"ContainerStarted","Data":"67a64c9cbb919ccb341491af6c0c3847cd5f487402ea590a59e23179489fb78a"} Dec 03 18:42:45 crc kubenswrapper[4687]: I1203 18:42:45.407583 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:42:45 crc kubenswrapper[4687]: E1203 18:42:45.408430 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:42:48 crc kubenswrapper[4687]: I1203 18:42:48.048473 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" event={"ID":"d863637d-0f22-43a2-a898-4440de4ce63c","Type":"ContainerStarted","Data":"7cdb4087491cd0a7de3862ddd322d0ee38a1ffecc0c18cd6440341c0c73ac7ad"} Dec 03 18:42:48 crc kubenswrapper[4687]: I1203 18:42:48.078904 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" podStartSLOduration=1.832830711 podStartE2EDuration="12.078884726s" podCreationTimestamp="2025-12-03 18:42:36 +0000 UTC" firstStartedPulling="2025-12-03 18:42:36.72768335 +0000 UTC m=+3789.618378783" lastFinishedPulling="2025-12-03 18:42:46.973737355 +0000 UTC m=+3799.864432798" observedRunningTime="2025-12-03 18:42:48.06236232 +0000 UTC m=+3800.953057763" watchObservedRunningTime="2025-12-03 18:42:48.078884726 +0000 UTC 
m=+3800.969580159" Dec 03 18:42:57 crc kubenswrapper[4687]: I1203 18:42:57.414772 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:42:57 crc kubenswrapper[4687]: E1203 18:42:57.415705 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:43:09 crc kubenswrapper[4687]: I1203 18:43:09.407177 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:43:09 crc kubenswrapper[4687]: E1203 18:43:09.407938 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:43:24 crc kubenswrapper[4687]: I1203 18:43:24.407391 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:43:24 crc kubenswrapper[4687]: E1203 18:43:24.408076 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" 
podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:43:27 crc kubenswrapper[4687]: I1203 18:43:27.410717 4687 generic.go:334] "Generic (PLEG): container finished" podID="d863637d-0f22-43a2-a898-4440de4ce63c" containerID="7cdb4087491cd0a7de3862ddd322d0ee38a1ffecc0c18cd6440341c0c73ac7ad" exitCode=0 Dec 03 18:43:27 crc kubenswrapper[4687]: I1203 18:43:27.419855 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" event={"ID":"d863637d-0f22-43a2-a898-4440de4ce63c","Type":"ContainerDied","Data":"7cdb4087491cd0a7de3862ddd322d0ee38a1ffecc0c18cd6440341c0c73ac7ad"} Dec 03 18:43:28 crc kubenswrapper[4687]: I1203 18:43:28.529334 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:43:28 crc kubenswrapper[4687]: I1203 18:43:28.562176 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4zgs8/crc-debug-fpvpx"] Dec 03 18:43:28 crc kubenswrapper[4687]: I1203 18:43:28.570263 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4zgs8/crc-debug-fpvpx"] Dec 03 18:43:28 crc kubenswrapper[4687]: I1203 18:43:28.708661 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8twt7\" (UniqueName: \"kubernetes.io/projected/d863637d-0f22-43a2-a898-4440de4ce63c-kube-api-access-8twt7\") pod \"d863637d-0f22-43a2-a898-4440de4ce63c\" (UID: \"d863637d-0f22-43a2-a898-4440de4ce63c\") " Dec 03 18:43:28 crc kubenswrapper[4687]: I1203 18:43:28.708843 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d863637d-0f22-43a2-a898-4440de4ce63c-host\") pod \"d863637d-0f22-43a2-a898-4440de4ce63c\" (UID: \"d863637d-0f22-43a2-a898-4440de4ce63c\") " Dec 03 18:43:28 crc kubenswrapper[4687]: I1203 18:43:28.709047 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d863637d-0f22-43a2-a898-4440de4ce63c-host" (OuterVolumeSpecName: "host") pod "d863637d-0f22-43a2-a898-4440de4ce63c" (UID: "d863637d-0f22-43a2-a898-4440de4ce63c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 18:43:28 crc kubenswrapper[4687]: I1203 18:43:28.709418 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d863637d-0f22-43a2-a898-4440de4ce63c-host\") on node \"crc\" DevicePath \"\"" Dec 03 18:43:28 crc kubenswrapper[4687]: I1203 18:43:28.718160 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d863637d-0f22-43a2-a898-4440de4ce63c-kube-api-access-8twt7" (OuterVolumeSpecName: "kube-api-access-8twt7") pod "d863637d-0f22-43a2-a898-4440de4ce63c" (UID: "d863637d-0f22-43a2-a898-4440de4ce63c"). InnerVolumeSpecName "kube-api-access-8twt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:43:28 crc kubenswrapper[4687]: I1203 18:43:28.811388 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8twt7\" (UniqueName: \"kubernetes.io/projected/d863637d-0f22-43a2-a898-4440de4ce63c-kube-api-access-8twt7\") on node \"crc\" DevicePath \"\"" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.417575 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d863637d-0f22-43a2-a898-4440de4ce63c" path="/var/lib/kubelet/pods/d863637d-0f22-43a2-a898-4440de4ce63c/volumes" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.436084 4687 scope.go:117] "RemoveContainer" containerID="7cdb4087491cd0a7de3862ddd322d0ee38a1ffecc0c18cd6440341c0c73ac7ad" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.436153 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-fpvpx" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.718111 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4zgs8/crc-debug-bgzxf"] Dec 03 18:43:29 crc kubenswrapper[4687]: E1203 18:43:29.718811 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d863637d-0f22-43a2-a898-4440de4ce63c" containerName="container-00" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.718823 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="d863637d-0f22-43a2-a898-4440de4ce63c" containerName="container-00" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.719010 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="d863637d-0f22-43a2-a898-4440de4ce63c" containerName="container-00" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.719609 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.828134 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-host\") pod \"crc-debug-bgzxf\" (UID: \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\") " pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.828176 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmts9\" (UniqueName: \"kubernetes.io/projected/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-kube-api-access-dmts9\") pod \"crc-debug-bgzxf\" (UID: \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\") " pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.930164 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-host\") pod \"crc-debug-bgzxf\" (UID: \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\") " pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.930235 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmts9\" (UniqueName: \"kubernetes.io/projected/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-kube-api-access-dmts9\") pod \"crc-debug-bgzxf\" (UID: \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\") " pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.930811 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-host\") pod \"crc-debug-bgzxf\" (UID: \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\") " pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:29 crc kubenswrapper[4687]: I1203 18:43:29.957618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmts9\" (UniqueName: \"kubernetes.io/projected/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-kube-api-access-dmts9\") pod \"crc-debug-bgzxf\" (UID: \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\") " pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:30 crc kubenswrapper[4687]: I1203 18:43:30.037510 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:30 crc kubenswrapper[4687]: W1203 18:43:30.100315 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod029f0f15_16f3_439e_9d4f_b9b684e6b1b2.slice/crio-3bcbfb1c90541d4628005ed2d858bde4e98d53938f38c7a2144dc33995f1dfea WatchSource:0}: Error finding container 3bcbfb1c90541d4628005ed2d858bde4e98d53938f38c7a2144dc33995f1dfea: Status 404 returned error can't find the container with id 3bcbfb1c90541d4628005ed2d858bde4e98d53938f38c7a2144dc33995f1dfea Dec 03 18:43:30 crc kubenswrapper[4687]: I1203 18:43:30.448847 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" event={"ID":"029f0f15-16f3-439e-9d4f-b9b684e6b1b2","Type":"ContainerStarted","Data":"3bcbfb1c90541d4628005ed2d858bde4e98d53938f38c7a2144dc33995f1dfea"} Dec 03 18:43:31 crc kubenswrapper[4687]: I1203 18:43:31.465435 4687 generic.go:334] "Generic (PLEG): container finished" podID="029f0f15-16f3-439e-9d4f-b9b684e6b1b2" containerID="31f46be6b87105729d8b6ace11182468b50c57175ecd5a49f1a07cb93c075588" exitCode=0 Dec 03 18:43:31 crc kubenswrapper[4687]: I1203 18:43:31.465564 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" event={"ID":"029f0f15-16f3-439e-9d4f-b9b684e6b1b2","Type":"ContainerDied","Data":"31f46be6b87105729d8b6ace11182468b50c57175ecd5a49f1a07cb93c075588"} Dec 03 18:43:31 crc kubenswrapper[4687]: I1203 18:43:31.957911 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4zgs8/crc-debug-bgzxf"] Dec 03 18:43:31 crc kubenswrapper[4687]: I1203 18:43:31.968858 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4zgs8/crc-debug-bgzxf"] Dec 03 18:43:32 crc kubenswrapper[4687]: I1203 18:43:32.584213 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:32 crc kubenswrapper[4687]: I1203 18:43:32.713444 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmts9\" (UniqueName: \"kubernetes.io/projected/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-kube-api-access-dmts9\") pod \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\" (UID: \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\") " Dec 03 18:43:32 crc kubenswrapper[4687]: I1203 18:43:32.713580 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-host\") pod \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\" (UID: \"029f0f15-16f3-439e-9d4f-b9b684e6b1b2\") " Dec 03 18:43:32 crc kubenswrapper[4687]: I1203 18:43:32.713814 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-host" (OuterVolumeSpecName: "host") pod "029f0f15-16f3-439e-9d4f-b9b684e6b1b2" (UID: "029f0f15-16f3-439e-9d4f-b9b684e6b1b2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 18:43:32 crc kubenswrapper[4687]: I1203 18:43:32.714848 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-host\") on node \"crc\" DevicePath \"\"" Dec 03 18:43:32 crc kubenswrapper[4687]: I1203 18:43:32.719068 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-kube-api-access-dmts9" (OuterVolumeSpecName: "kube-api-access-dmts9") pod "029f0f15-16f3-439e-9d4f-b9b684e6b1b2" (UID: "029f0f15-16f3-439e-9d4f-b9b684e6b1b2"). InnerVolumeSpecName "kube-api-access-dmts9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:43:32 crc kubenswrapper[4687]: I1203 18:43:32.817570 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmts9\" (UniqueName: \"kubernetes.io/projected/029f0f15-16f3-439e-9d4f-b9b684e6b1b2-kube-api-access-dmts9\") on node \"crc\" DevicePath \"\"" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.133657 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4zgs8/crc-debug-lt6kg"] Dec 03 18:43:33 crc kubenswrapper[4687]: E1203 18:43:33.134158 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="029f0f15-16f3-439e-9d4f-b9b684e6b1b2" containerName="container-00" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.134178 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="029f0f15-16f3-439e-9d4f-b9b684e6b1b2" containerName="container-00" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.134422 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="029f0f15-16f3-439e-9d4f-b9b684e6b1b2" containerName="container-00" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.135074 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.224369 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6dqp\" (UniqueName: \"kubernetes.io/projected/c04adcab-4386-4c69-a59f-52c10525e860-kube-api-access-b6dqp\") pod \"crc-debug-lt6kg\" (UID: \"c04adcab-4386-4c69-a59f-52c10525e860\") " pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.224509 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c04adcab-4386-4c69-a59f-52c10525e860-host\") pod \"crc-debug-lt6kg\" (UID: \"c04adcab-4386-4c69-a59f-52c10525e860\") " pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.326050 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6dqp\" (UniqueName: \"kubernetes.io/projected/c04adcab-4386-4c69-a59f-52c10525e860-kube-api-access-b6dqp\") pod \"crc-debug-lt6kg\" (UID: \"c04adcab-4386-4c69-a59f-52c10525e860\") " pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.326113 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c04adcab-4386-4c69-a59f-52c10525e860-host\") pod \"crc-debug-lt6kg\" (UID: \"c04adcab-4386-4c69-a59f-52c10525e860\") " pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.326290 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c04adcab-4386-4c69-a59f-52c10525e860-host\") pod \"crc-debug-lt6kg\" (UID: \"c04adcab-4386-4c69-a59f-52c10525e860\") " pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:33 crc 
kubenswrapper[4687]: I1203 18:43:33.344491 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6dqp\" (UniqueName: \"kubernetes.io/projected/c04adcab-4386-4c69-a59f-52c10525e860-kube-api-access-b6dqp\") pod \"crc-debug-lt6kg\" (UID: \"c04adcab-4386-4c69-a59f-52c10525e860\") " pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.427302 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="029f0f15-16f3-439e-9d4f-b9b684e6b1b2" path="/var/lib/kubelet/pods/029f0f15-16f3-439e-9d4f-b9b684e6b1b2/volumes" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.457027 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.484591 4687 scope.go:117] "RemoveContainer" containerID="31f46be6b87105729d8b6ace11182468b50c57175ecd5a49f1a07cb93c075588" Dec 03 18:43:33 crc kubenswrapper[4687]: I1203 18:43:33.484634 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-bgzxf" Dec 03 18:43:33 crc kubenswrapper[4687]: W1203 18:43:33.489888 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc04adcab_4386_4c69_a59f_52c10525e860.slice/crio-33414e8b6ce4ae3bb96e9743b913bc8455e6d17237fdbf2e067b904f3c3b02af WatchSource:0}: Error finding container 33414e8b6ce4ae3bb96e9743b913bc8455e6d17237fdbf2e067b904f3c3b02af: Status 404 returned error can't find the container with id 33414e8b6ce4ae3bb96e9743b913bc8455e6d17237fdbf2e067b904f3c3b02af Dec 03 18:43:34 crc kubenswrapper[4687]: I1203 18:43:34.496263 4687 generic.go:334] "Generic (PLEG): container finished" podID="c04adcab-4386-4c69-a59f-52c10525e860" containerID="7a015ecf71fede4677575e3e2d362f6a0d0304bf121c4bf91ef5ac62eace87da" exitCode=0 Dec 03 18:43:34 crc kubenswrapper[4687]: I1203 18:43:34.496374 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" event={"ID":"c04adcab-4386-4c69-a59f-52c10525e860","Type":"ContainerDied","Data":"7a015ecf71fede4677575e3e2d362f6a0d0304bf121c4bf91ef5ac62eace87da"} Dec 03 18:43:34 crc kubenswrapper[4687]: I1203 18:43:34.497825 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" event={"ID":"c04adcab-4386-4c69-a59f-52c10525e860","Type":"ContainerStarted","Data":"33414e8b6ce4ae3bb96e9743b913bc8455e6d17237fdbf2e067b904f3c3b02af"} Dec 03 18:43:34 crc kubenswrapper[4687]: I1203 18:43:34.542728 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4zgs8/crc-debug-lt6kg"] Dec 03 18:43:34 crc kubenswrapper[4687]: I1203 18:43:34.550403 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4zgs8/crc-debug-lt6kg"] Dec 03 18:43:35 crc kubenswrapper[4687]: I1203 18:43:35.620395 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:35 crc kubenswrapper[4687]: I1203 18:43:35.789349 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c04adcab-4386-4c69-a59f-52c10525e860-host\") pod \"c04adcab-4386-4c69-a59f-52c10525e860\" (UID: \"c04adcab-4386-4c69-a59f-52c10525e860\") " Dec 03 18:43:35 crc kubenswrapper[4687]: I1203 18:43:35.789494 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6dqp\" (UniqueName: \"kubernetes.io/projected/c04adcab-4386-4c69-a59f-52c10525e860-kube-api-access-b6dqp\") pod \"c04adcab-4386-4c69-a59f-52c10525e860\" (UID: \"c04adcab-4386-4c69-a59f-52c10525e860\") " Dec 03 18:43:35 crc kubenswrapper[4687]: I1203 18:43:35.790816 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c04adcab-4386-4c69-a59f-52c10525e860-host" (OuterVolumeSpecName: "host") pod "c04adcab-4386-4c69-a59f-52c10525e860" (UID: "c04adcab-4386-4c69-a59f-52c10525e860"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 18:43:35 crc kubenswrapper[4687]: I1203 18:43:35.795447 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04adcab-4386-4c69-a59f-52c10525e860-kube-api-access-b6dqp" (OuterVolumeSpecName: "kube-api-access-b6dqp") pod "c04adcab-4386-4c69-a59f-52c10525e860" (UID: "c04adcab-4386-4c69-a59f-52c10525e860"). InnerVolumeSpecName "kube-api-access-b6dqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:43:35 crc kubenswrapper[4687]: I1203 18:43:35.892659 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6dqp\" (UniqueName: \"kubernetes.io/projected/c04adcab-4386-4c69-a59f-52c10525e860-kube-api-access-b6dqp\") on node \"crc\" DevicePath \"\"" Dec 03 18:43:35 crc kubenswrapper[4687]: I1203 18:43:35.892707 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c04adcab-4386-4c69-a59f-52c10525e860-host\") on node \"crc\" DevicePath \"\"" Dec 03 18:43:36 crc kubenswrapper[4687]: I1203 18:43:36.515791 4687 scope.go:117] "RemoveContainer" containerID="7a015ecf71fede4677575e3e2d362f6a0d0304bf121c4bf91ef5ac62eace87da" Dec 03 18:43:36 crc kubenswrapper[4687]: I1203 18:43:36.515832 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zgs8/crc-debug-lt6kg" Dec 03 18:43:37 crc kubenswrapper[4687]: I1203 18:43:37.417276 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04adcab-4386-4c69-a59f-52c10525e860" path="/var/lib/kubelet/pods/c04adcab-4386-4c69-a59f-52c10525e860/volumes" Dec 03 18:43:38 crc kubenswrapper[4687]: I1203 18:43:38.407504 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:43:38 crc kubenswrapper[4687]: E1203 18:43:38.407920 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.287211 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-f84949b66-zfm22_c37e3f72-636e-4175-a805-8b2aa8f52eca/barbican-api/0.log" Dec 03 18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.394658 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f84949b66-zfm22_c37e3f72-636e-4175-a805-8b2aa8f52eca/barbican-api-log/0.log" Dec 03 18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.407483 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:43:49 crc kubenswrapper[4687]: E1203 18:43:49.407887 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.521456 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b984dc754-pn82p_68f5675a-1ac6-475a-b0ba-b83e975e838f/barbican-keystone-listener-log/0.log" Dec 03 18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.534227 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b984dc754-pn82p_68f5675a-1ac6-475a-b0ba-b83e975e838f/barbican-keystone-listener/0.log" Dec 03 18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.697079 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dc58d75dc-vk2m4_6f8ac0e6-dadf-44e8-8e92-56c306da2a8e/barbican-worker/0.log" Dec 03 18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.731984 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dc58d75dc-vk2m4_6f8ac0e6-dadf-44e8-8e92-56c306da2a8e/barbican-worker-log/0.log" Dec 03 
18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.838504 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf_6dcace96-ba84-4176-9fa0-216e86ae113b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.926534 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bd9cfd0-6df9-424b-b267-98e0a180a758/ceilometer-central-agent/0.log" Dec 03 18:43:49 crc kubenswrapper[4687]: I1203 18:43:49.976495 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bd9cfd0-6df9-424b-b267-98e0a180a758/ceilometer-notification-agent/0.log" Dec 03 18:43:50 crc kubenswrapper[4687]: I1203 18:43:50.041806 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bd9cfd0-6df9-424b-b267-98e0a180a758/proxy-httpd/0.log" Dec 03 18:43:50 crc kubenswrapper[4687]: I1203 18:43:50.089427 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bd9cfd0-6df9-424b-b267-98e0a180a758/sg-core/0.log" Dec 03 18:43:50 crc kubenswrapper[4687]: I1203 18:43:50.180277 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e4766b79-a447-4290-bbe9-dc10a59ced40/cinder-api/0.log" Dec 03 18:43:50 crc kubenswrapper[4687]: I1203 18:43:50.230148 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e4766b79-a447-4290-bbe9-dc10a59ced40/cinder-api-log/0.log" Dec 03 18:43:50 crc kubenswrapper[4687]: I1203 18:43:50.539183 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_05480209-7592-4ddf-a2d9-f06d4dce2c75/cinder-scheduler/0.log" Dec 03 18:43:50 crc kubenswrapper[4687]: I1203 18:43:50.573789 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_05480209-7592-4ddf-a2d9-f06d4dce2c75/probe/0.log" Dec 03 18:43:50 crc 
kubenswrapper[4687]: I1203 18:43:50.717571 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn_bd21b7de-e79a-45b6-a3ea-9fb73f55fea8/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:50 crc kubenswrapper[4687]: I1203 18:43:50.823729 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk_d79dbe03-ec71-4fc7-8237-b3094ecb81ca/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:50 crc kubenswrapper[4687]: I1203 18:43:50.917921 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-dpnwg_23a0d543-20cc-4b95-9f11-12b55442b95e/init/0.log" Dec 03 18:43:51 crc kubenswrapper[4687]: I1203 18:43:51.104193 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-dpnwg_23a0d543-20cc-4b95-9f11-12b55442b95e/init/0.log" Dec 03 18:43:51 crc kubenswrapper[4687]: I1203 18:43:51.118968 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-88lsw_283f8d5d-eee3-4591-b0d2-65c3cc8fa78f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:51 crc kubenswrapper[4687]: I1203 18:43:51.224772 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-dpnwg_23a0d543-20cc-4b95-9f11-12b55442b95e/dnsmasq-dns/0.log" Dec 03 18:43:51 crc kubenswrapper[4687]: I1203 18:43:51.326349 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a919d81a-089d-4146-a4ee-c2db16491d11/glance-httpd/0.log" Dec 03 18:43:51 crc kubenswrapper[4687]: I1203 18:43:51.369201 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a919d81a-089d-4146-a4ee-c2db16491d11/glance-log/0.log" Dec 03 18:43:51 crc kubenswrapper[4687]: I1203 
18:43:51.517336 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c3524def-b150-4d8d-9315-b4435781cf34/glance-httpd/0.log" Dec 03 18:43:51 crc kubenswrapper[4687]: I1203 18:43:51.558327 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c3524def-b150-4d8d-9315-b4435781cf34/glance-log/0.log" Dec 03 18:43:51 crc kubenswrapper[4687]: I1203 18:43:51.788077 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6968cc7b7b-57qh6_b08dc684-ab9f-41db-a259-2d06b757f3cf/horizon/0.log" Dec 03 18:43:51 crc kubenswrapper[4687]: I1203 18:43:51.853744 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls_ff93c8d7-1225-45d9-952c-f770d7ad7e33/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:52 crc kubenswrapper[4687]: I1203 18:43:52.066684 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6968cc7b7b-57qh6_b08dc684-ab9f-41db-a259-2d06b757f3cf/horizon-log/0.log" Dec 03 18:43:52 crc kubenswrapper[4687]: I1203 18:43:52.159893 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-89dpx_a9c34c4b-6990-485c-91b7-c07c7191c398/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:52 crc kubenswrapper[4687]: I1203 18:43:52.287837 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29413081-r8x9h_b058710c-db65-4f53-b9b7-2e279672355a/keystone-cron/0.log" Dec 03 18:43:52 crc kubenswrapper[4687]: I1203 18:43:52.427256 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7fc787b46b-k9z8g_42536d5c-2479-4f9f-a6ff-d3705bb42b8f/keystone-api/0.log" Dec 03 18:43:52 crc kubenswrapper[4687]: I1203 18:43:52.486376 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_976b9b5d-29fa-48e5-a77a-f3f5a480ad94/kube-state-metrics/0.log" Dec 03 18:43:52 crc kubenswrapper[4687]: I1203 18:43:52.636217 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp_e3ca0b80-1626-411c-b15c-c66f1f18cf9e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:53 crc kubenswrapper[4687]: I1203 18:43:53.011227 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d44b68cb5-gzqxl_120144a6-19ba-4119-9ef7-7c70664c5e0c/neutron-httpd/0.log" Dec 03 18:43:53 crc kubenswrapper[4687]: I1203 18:43:53.082464 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d44b68cb5-gzqxl_120144a6-19ba-4119-9ef7-7c70664c5e0c/neutron-api/0.log" Dec 03 18:43:53 crc kubenswrapper[4687]: I1203 18:43:53.260667 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6_7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:53 crc kubenswrapper[4687]: I1203 18:43:53.736575 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_be4907b0-15af-400a-8430-ee3890e80010/nova-cell0-conductor-conductor/0.log" Dec 03 18:43:53 crc kubenswrapper[4687]: I1203 18:43:53.748507 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_63033eea-9708-468e-b1e6-87e6882a5c75/nova-api-log/0.log" Dec 03 18:43:54 crc kubenswrapper[4687]: I1203 18:43:54.071952 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d6d72bd8-fd40-4856-96ee-f753ba4c170b/nova-cell1-conductor-conductor/0.log" Dec 03 18:43:54 crc kubenswrapper[4687]: I1203 18:43:54.076143 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_63033eea-9708-468e-b1e6-87e6882a5c75/nova-api-api/0.log" 
Dec 03 18:43:54 crc kubenswrapper[4687]: I1203 18:43:54.083032 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90/nova-cell1-novncproxy-novncproxy/0.log" Dec 03 18:43:54 crc kubenswrapper[4687]: I1203 18:43:54.318462 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-l9stg_90387c4a-7957-4b6a-983a-0608fe7a0977/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:54 crc kubenswrapper[4687]: I1203 18:43:54.412098 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c0ff347c-1775-431c-bc91-ed5a80ee620e/nova-metadata-log/0.log" Dec 03 18:43:54 crc kubenswrapper[4687]: I1203 18:43:54.692572 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_3be8282f-510f-4d0d-a98f-8aab605e3805/nova-scheduler-scheduler/0.log" Dec 03 18:43:54 crc kubenswrapper[4687]: I1203 18:43:54.838061 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b00142cd-f59e-49d3-9d26-e1344598a59a/mysql-bootstrap/0.log" Dec 03 18:43:54 crc kubenswrapper[4687]: I1203 18:43:54.965591 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b00142cd-f59e-49d3-9d26-e1344598a59a/mysql-bootstrap/0.log" Dec 03 18:43:54 crc kubenswrapper[4687]: I1203 18:43:54.997584 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b00142cd-f59e-49d3-9d26-e1344598a59a/galera/0.log" Dec 03 18:43:55 crc kubenswrapper[4687]: I1203 18:43:55.192926 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_04732311-c8eb-4351-a564-78ce8c8e1811/mysql-bootstrap/0.log" Dec 03 18:43:55 crc kubenswrapper[4687]: I1203 18:43:55.325281 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_04732311-c8eb-4351-a564-78ce8c8e1811/mysql-bootstrap/0.log" Dec 03 18:43:55 crc kubenswrapper[4687]: I1203 18:43:55.373621 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_04732311-c8eb-4351-a564-78ce8c8e1811/galera/0.log" Dec 03 18:43:55 crc kubenswrapper[4687]: I1203 18:43:55.581811 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b2bf6226-8105-471c-8098-0786e52ab01d/openstackclient/0.log" Dec 03 18:43:55 crc kubenswrapper[4687]: I1203 18:43:55.681774 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2lczs_3037eba1-1fab-4d56-a3f0-1cecb58b3f7a/ovn-controller/0.log" Dec 03 18:43:55 crc kubenswrapper[4687]: I1203 18:43:55.683049 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c0ff347c-1775-431c-bc91-ed5a80ee620e/nova-metadata-metadata/0.log" Dec 03 18:43:55 crc kubenswrapper[4687]: I1203 18:43:55.804784 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4sqs2_53de0da8-3b25-403a-9956-79082a62780b/openstack-network-exporter/0.log" Dec 03 18:43:55 crc kubenswrapper[4687]: I1203 18:43:55.979776 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtnmq_2642fdf0-56b9-4b22-ace6-cde247a8f08e/ovsdb-server-init/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.097208 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtnmq_2642fdf0-56b9-4b22-ace6-cde247a8f08e/ovsdb-server/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.099317 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtnmq_2642fdf0-56b9-4b22-ace6-cde247a8f08e/ovs-vswitchd/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.135471 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-gtnmq_2642fdf0-56b9-4b22-ace6-cde247a8f08e/ovsdb-server-init/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.312298 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fe36f76e-b5b2-4dfe-923b-0516ea0af76f/openstack-network-exporter/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.325287 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-tqzn2_cf4db291-8ad7-4e7e-8843-29e3287b05ca/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.440932 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fe36f76e-b5b2-4dfe-923b-0516ea0af76f/ovn-northd/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.621640 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aff56e13-4338-42bd-a378-b0d72daa296e/openstack-network-exporter/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.663615 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aff56e13-4338-42bd-a378-b0d72daa296e/ovsdbserver-nb/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.778151 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2e41fb58-0d75-4204-85eb-7c5526d637e6/openstack-network-exporter/0.log" Dec 03 18:43:56 crc kubenswrapper[4687]: I1203 18:43:56.807250 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2e41fb58-0d75-4204-85eb-7c5526d637e6/ovsdbserver-sb/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.018077 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-699567968b-hhzfv_66dbaeab-7905-40ae-9e1e-3674573a1aa3/placement-api/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.115653 4687 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_placement-699567968b-hhzfv_66dbaeab-7905-40ae-9e1e-3674573a1aa3/placement-log/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.124022 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b31a63e3-b46e-403c-b1b4-3acd833f453f/setup-container/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.366684 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b31a63e3-b46e-403c-b1b4-3acd833f453f/rabbitmq/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.441879 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bef36ed8-b2b0-465c-9719-c9ff963dcd2f/setup-container/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.461297 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b31a63e3-b46e-403c-b1b4-3acd833f453f/setup-container/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.682972 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bef36ed8-b2b0-465c-9719-c9ff963dcd2f/setup-container/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.699989 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bef36ed8-b2b0-465c-9719-c9ff963dcd2f/rabbitmq/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.700393 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h_b8e74449-f8e1-4cf8-8a93-e04ee18070e1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:57 crc kubenswrapper[4687]: I1203 18:43:57.960020 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-lrxk2_788d4c10-cc61-4086-8e29-6dcdf6592f4a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:58 crc 
kubenswrapper[4687]: I1203 18:43:58.014216 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8_416ff6ab-b4d6-451c-8219-1db28ce18f92/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.223878 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kcjpj_ba0bb298-d1f4-478c-a663-9a8e20bfdcfd/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.273745 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ssfcd_cc101fd4-addb-4d63-b123-d0c54197956c/ssh-known-hosts-edpm-deployment/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.534514 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7bd478575-t6xjs_70063881-c779-4ed9-9258-a175b3ee15f4/proxy-httpd/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.543542 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7bd478575-t6xjs_70063881-c779-4ed9-9258-a175b3ee15f4/proxy-server/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.707110 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-kl6nk_9f72b95f-3e3d-49b4-8bca-8d391384a077/swift-ring-rebalance/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.750974 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/account-auditor/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.816111 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/account-reaper/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.970231 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/account-server/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.973034 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/account-replicator/0.log" Dec 03 18:43:58 crc kubenswrapper[4687]: I1203 18:43:58.997111 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/container-auditor/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.133479 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/container-replicator/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.200495 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/container-updater/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.246267 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/container-server/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.269208 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-auditor/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.355481 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-expirer/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.479896 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-updater/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.488701 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-server/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.546766 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-replicator/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.568909 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/rsync/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.743685 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/swift-recon-cron/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.789245 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4c62j_0ce84a46-82bc-42a8-b645-d801d2a8edff/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:43:59 crc kubenswrapper[4687]: I1203 18:43:59.963697 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_3c56ab4c-455a-4436-927e-3dba7e4aa0ba/tempest-tests-tempest-tests-runner/0.log" Dec 03 18:44:00 crc kubenswrapper[4687]: I1203 18:44:00.106155 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e1272d14-143a-4ce6-9b77-7fa6e7cd99f0/test-operator-logs-container/0.log" Dec 03 18:44:00 crc kubenswrapper[4687]: I1203 18:44:00.131839 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-ztppq_6c19f653-0ec6-4a75-a396-dacbe41c2c2e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:44:02 crc kubenswrapper[4687]: I1203 18:44:02.407803 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 
18:44:02 crc kubenswrapper[4687]: E1203 18:44:02.408392 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.095351 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q2r2b"] Dec 03 18:44:10 crc kubenswrapper[4687]: E1203 18:44:10.096600 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04adcab-4386-4c69-a59f-52c10525e860" containerName="container-00" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.096618 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04adcab-4386-4c69-a59f-52c10525e860" containerName="container-00" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.096914 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04adcab-4386-4c69-a59f-52c10525e860" containerName="container-00" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.098526 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.154279 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2r2b"] Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.192889 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-catalog-content\") pod \"redhat-operators-q2r2b\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.193080 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-utilities\") pod \"redhat-operators-q2r2b\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.193284 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5qx\" (UniqueName: \"kubernetes.io/projected/5a6976b2-c936-4246-bec9-d1967a7a85b5-kube-api-access-7l5qx\") pod \"redhat-operators-q2r2b\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.296374 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-catalog-content\") pod \"redhat-operators-q2r2b\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.296505 4687 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-utilities\") pod \"redhat-operators-q2r2b\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.296559 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l5qx\" (UniqueName: \"kubernetes.io/projected/5a6976b2-c936-4246-bec9-d1967a7a85b5-kube-api-access-7l5qx\") pod \"redhat-operators-q2r2b\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.297396 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-catalog-content\") pod \"redhat-operators-q2r2b\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.297622 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-utilities\") pod \"redhat-operators-q2r2b\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.319156 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l5qx\" (UniqueName: \"kubernetes.io/projected/5a6976b2-c936-4246-bec9-d1967a7a85b5-kube-api-access-7l5qx\") pod \"redhat-operators-q2r2b\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.440516 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.649470 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b6b36375-980f-4c1d-8ddb-61d9565db565/memcached/0.log" Dec 03 18:44:10 crc kubenswrapper[4687]: I1203 18:44:10.956323 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2r2b"] Dec 03 18:44:11 crc kubenswrapper[4687]: I1203 18:44:11.818239 4687 generic.go:334] "Generic (PLEG): container finished" podID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerID="36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba" exitCode=0 Dec 03 18:44:11 crc kubenswrapper[4687]: I1203 18:44:11.818603 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2r2b" event={"ID":"5a6976b2-c936-4246-bec9-d1967a7a85b5","Type":"ContainerDied","Data":"36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba"} Dec 03 18:44:11 crc kubenswrapper[4687]: I1203 18:44:11.818636 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2r2b" event={"ID":"5a6976b2-c936-4246-bec9-d1967a7a85b5","Type":"ContainerStarted","Data":"f7dae44444355dfb29f9f4a7dc98eddcc1924e18df3576877880018ae87e36da"} Dec 03 18:44:11 crc kubenswrapper[4687]: I1203 18:44:11.820300 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:44:13 crc kubenswrapper[4687]: I1203 18:44:13.836144 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2r2b" event={"ID":"5a6976b2-c936-4246-bec9-d1967a7a85b5","Type":"ContainerStarted","Data":"6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5"} Dec 03 18:44:15 crc kubenswrapper[4687]: I1203 18:44:15.408009 4687 scope.go:117] "RemoveContainer" 
containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:44:15 crc kubenswrapper[4687]: I1203 18:44:15.855004 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"4d9cf49ef51a55348040cd2616b9e1c904faf09e72a731278a4ed38853d9ee99"} Dec 03 18:44:16 crc kubenswrapper[4687]: I1203 18:44:16.865211 4687 generic.go:334] "Generic (PLEG): container finished" podID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerID="6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5" exitCode=0 Dec 03 18:44:16 crc kubenswrapper[4687]: I1203 18:44:16.865275 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2r2b" event={"ID":"5a6976b2-c936-4246-bec9-d1967a7a85b5","Type":"ContainerDied","Data":"6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5"} Dec 03 18:44:19 crc kubenswrapper[4687]: I1203 18:44:19.892045 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2r2b" event={"ID":"5a6976b2-c936-4246-bec9-d1967a7a85b5","Type":"ContainerStarted","Data":"41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414"} Dec 03 18:44:19 crc kubenswrapper[4687]: I1203 18:44:19.909361 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2r2b" podStartSLOduration=2.8061738 podStartE2EDuration="9.909340903s" podCreationTimestamp="2025-12-03 18:44:10 +0000 UTC" firstStartedPulling="2025-12-03 18:44:11.81993364 +0000 UTC m=+3884.710629073" lastFinishedPulling="2025-12-03 18:44:18.923100733 +0000 UTC m=+3891.813796176" observedRunningTime="2025-12-03 18:44:19.907677729 +0000 UTC m=+3892.798373162" watchObservedRunningTime="2025-12-03 18:44:19.909340903 +0000 UTC m=+3892.800036346" Dec 03 18:44:20 crc kubenswrapper[4687]: I1203 
18:44:20.442385 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:20 crc kubenswrapper[4687]: I1203 18:44:20.442469 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:21 crc kubenswrapper[4687]: I1203 18:44:21.501735 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q2r2b" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerName="registry-server" probeResult="failure" output=< Dec 03 18:44:21 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Dec 03 18:44:21 crc kubenswrapper[4687]: > Dec 03 18:44:26 crc kubenswrapper[4687]: I1203 18:44:26.745566 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-zmzr6_d3d2df8d-6f3d-4f5d-afd3-cef00553188e/kube-rbac-proxy/0.log" Dec 03 18:44:26 crc kubenswrapper[4687]: I1203 18:44:26.918321 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-zmzr6_d3d2df8d-6f3d-4f5d-afd3-cef00553188e/manager/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.010759 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/util/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.156187 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/util/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.196949 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/pull/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.219053 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/pull/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.402996 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/pull/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.416960 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/util/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.439594 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/extract/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.616315 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-fn5xb_f7046b74-0868-4ee1-b917-56e695a94d16/manager/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.640445 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-fn5xb_f7046b74-0868-4ee1-b917-56e695a94d16/kube-rbac-proxy/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.693142 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-6xgff_6fa88489-3c47-4369-9f87-a3f029f75a42/kube-rbac-proxy/0.log" Dec 03 18:44:27 crc 
kubenswrapper[4687]: I1203 18:44:27.852980 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-6xgff_6fa88489-3c47-4369-9f87-a3f029f75a42/manager/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.856598 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-nwzp4_496e4d0a-a886-4d53-993c-66081d8843ae/kube-rbac-proxy/0.log" Dec 03 18:44:27 crc kubenswrapper[4687]: I1203 18:44:27.990606 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-nwzp4_496e4d0a-a886-4d53-993c-66081d8843ae/manager/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.094064 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-h6x45_b63b97e0-be73-4e96-9904-9f5c030a0afb/manager/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.095795 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-h6x45_b63b97e0-be73-4e96-9904-9f5c030a0afb/kube-rbac-proxy/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.252898 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-sdbgv_0e3acf7a-4766-4f89-9f70-d5ec2690318b/kube-rbac-proxy/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.325336 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-sdbgv_0e3acf7a-4766-4f89-9f70-d5ec2690318b/manager/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.471741 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-lx2md_6abb698e-8c6d-40c8-b87d-dcd828bba5d3/kube-rbac-proxy/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.568178 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-fcwrt_e48eab37-9bd2-4f8d-892a-4436c68bab21/kube-rbac-proxy/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.596205 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-lx2md_6abb698e-8c6d-40c8-b87d-dcd828bba5d3/manager/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.740487 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-fcwrt_e48eab37-9bd2-4f8d-892a-4436c68bab21/manager/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.793953 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-mzvdw_e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b/kube-rbac-proxy/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.872569 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-mzvdw_e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b/manager/0.log" Dec 03 18:44:28 crc kubenswrapper[4687]: I1203 18:44:28.998697 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-hrlqq_59db1fe9-9d85-4346-8718-4e9139c8acb9/kube-rbac-proxy/0.log" Dec 03 18:44:29 crc kubenswrapper[4687]: I1203 18:44:29.016661 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-hrlqq_59db1fe9-9d85-4346-8718-4e9139c8acb9/manager/0.log" Dec 03 18:44:29 crc kubenswrapper[4687]: I1203 18:44:29.284683 
4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-7ftp5_d57e7a62-6958-4e64-98e6-a22857b00e32/kube-rbac-proxy/0.log" Dec 03 18:44:30 crc kubenswrapper[4687]: I1203 18:44:30.302869 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-bfwb6_491fb200-3ef9-4833-83c6-22b575b46998/kube-rbac-proxy/0.log" Dec 03 18:44:30 crc kubenswrapper[4687]: I1203 18:44:30.327035 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-bfwb6_491fb200-3ef9-4833-83c6-22b575b46998/manager/0.log" Dec 03 18:44:30 crc kubenswrapper[4687]: I1203 18:44:30.343708 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-7ftp5_d57e7a62-6958-4e64-98e6-a22857b00e32/manager/0.log" Dec 03 18:44:30 crc kubenswrapper[4687]: I1203 18:44:30.520862 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:30 crc kubenswrapper[4687]: I1203 18:44:30.619561 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:30 crc kubenswrapper[4687]: I1203 18:44:30.775148 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2r2b"] Dec 03 18:44:31 crc kubenswrapper[4687]: I1203 18:44:31.373357 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xj6hg_5952221c-60d0-4159-bbd8-2adf2f1e3d8e/kube-rbac-proxy/0.log" Dec 03 18:44:31 crc kubenswrapper[4687]: I1203 18:44:31.447903 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9pldz_379ff892-6dae-4b1b-9ae1-f6b7da9f4db6/manager/0.log" Dec 03 18:44:31 crc kubenswrapper[4687]: I1203 18:44:31.565084 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9pldz_379ff892-6dae-4b1b-9ae1-f6b7da9f4db6/kube-rbac-proxy/0.log" Dec 03 18:44:31 crc kubenswrapper[4687]: I1203 18:44:31.577524 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w_58a46d42-dade-4bfe-b9b0-bddac75f1d81/kube-rbac-proxy/0.log" Dec 03 18:44:31 crc kubenswrapper[4687]: I1203 18:44:31.742548 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xj6hg_5952221c-60d0-4159-bbd8-2adf2f1e3d8e/manager/0.log" Dec 03 18:44:31 crc kubenswrapper[4687]: I1203 18:44:31.900647 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w_58a46d42-dade-4bfe-b9b0-bddac75f1d81/manager/0.log" Dec 03 18:44:31 crc kubenswrapper[4687]: I1203 18:44:31.995553 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q2r2b" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerName="registry-server" containerID="cri-o://41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414" gracePeriod=2 Dec 03 18:44:32 crc kubenswrapper[4687]: I1203 18:44:32.752951 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5r6vj_8e1a26a4-e1d4-4d8f-a452-a86a688788f3/registry-server/0.log" Dec 03 18:44:32 crc kubenswrapper[4687]: I1203 18:44:32.824207 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-586db6c45c-hj8pp_c70399a2-304f-40f7-9f8e-b566d290ede2/operator/0.log" Dec 03 18:44:32 crc kubenswrapper[4687]: I1203 18:44:32.991054 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-xjjxv_1655eb12-9c61-4959-9886-bd6f50b95292/kube-rbac-proxy/0.log" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.009000 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.024052 4687 generic.go:334] "Generic (PLEG): container finished" podID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerID="41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414" exitCode=0 Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.024106 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2r2b" event={"ID":"5a6976b2-c936-4246-bec9-d1967a7a85b5","Type":"ContainerDied","Data":"41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414"} Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.024213 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2r2b" event={"ID":"5a6976b2-c936-4246-bec9-d1967a7a85b5","Type":"ContainerDied","Data":"f7dae44444355dfb29f9f4a7dc98eddcc1924e18df3576877880018ae87e36da"} Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.024245 4687 scope.go:117] "RemoveContainer" containerID="41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.086810 4687 scope.go:117] "RemoveContainer" containerID="6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.142403 4687 scope.go:117] "RemoveContainer" 
containerID="36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.155015 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l5qx\" (UniqueName: \"kubernetes.io/projected/5a6976b2-c936-4246-bec9-d1967a7a85b5-kube-api-access-7l5qx\") pod \"5a6976b2-c936-4246-bec9-d1967a7a85b5\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.155115 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-utilities\") pod \"5a6976b2-c936-4246-bec9-d1967a7a85b5\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.155167 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-catalog-content\") pod \"5a6976b2-c936-4246-bec9-d1967a7a85b5\" (UID: \"5a6976b2-c936-4246-bec9-d1967a7a85b5\") " Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.159111 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-utilities" (OuterVolumeSpecName: "utilities") pod "5a6976b2-c936-4246-bec9-d1967a7a85b5" (UID: "5a6976b2-c936-4246-bec9-d1967a7a85b5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.161170 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-xjjxv_1655eb12-9c61-4959-9886-bd6f50b95292/manager/0.log" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.180517 4687 scope.go:117] "RemoveContainer" containerID="41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414" Dec 03 18:44:33 crc kubenswrapper[4687]: E1203 18:44:33.184158 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414\": container with ID starting with 41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414 not found: ID does not exist" containerID="41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.184208 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414"} err="failed to get container status \"41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414\": rpc error: code = NotFound desc = could not find container \"41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414\": container with ID starting with 41840adbd4254c0c2b6b190546abfa7ee4d67f82f9ab4ab8ad96819be1295414 not found: ID does not exist" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.184242 4687 scope.go:117] "RemoveContainer" containerID="6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5" Dec 03 18:44:33 crc kubenswrapper[4687]: E1203 18:44:33.188276 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5\": container with ID 
starting with 6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5 not found: ID does not exist" containerID="6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.188327 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5"} err="failed to get container status \"6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5\": rpc error: code = NotFound desc = could not find container \"6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5\": container with ID starting with 6e700da219c5dfa02383e70422d6189e18905db9bcb30b8d8baa68b5390b84b5 not found: ID does not exist" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.188358 4687 scope.go:117] "RemoveContainer" containerID="36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba" Dec 03 18:44:33 crc kubenswrapper[4687]: E1203 18:44:33.193290 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba\": container with ID starting with 36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba not found: ID does not exist" containerID="36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.193327 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba"} err="failed to get container status \"36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba\": rpc error: code = NotFound desc = could not find container \"36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba\": container with ID starting with 36c5da5949ca8cff2a104d2864a90719871b3545a200783aa52856ce03f9fcba not found: 
ID does not exist" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.202089 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a6976b2-c936-4246-bec9-d1967a7a85b5-kube-api-access-7l5qx" (OuterVolumeSpecName: "kube-api-access-7l5qx") pod "5a6976b2-c936-4246-bec9-d1967a7a85b5" (UID: "5a6976b2-c936-4246-bec9-d1967a7a85b5"). InnerVolumeSpecName "kube-api-access-7l5qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.261296 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l5qx\" (UniqueName: \"kubernetes.io/projected/5a6976b2-c936-4246-bec9-d1967a7a85b5-kube-api-access-7l5qx\") on node \"crc\" DevicePath \"\"" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.261335 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.444447 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a6976b2-c936-4246-bec9-d1967a7a85b5" (UID: "5a6976b2-c936-4246-bec9-d1967a7a85b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.464557 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a6976b2-c936-4246-bec9-d1967a7a85b5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.669575 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-vpdn7_e0b4d539-a10d-4f94-8097-667df133713d/kube-rbac-proxy/0.log" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.752095 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-vpdn7_e0b4d539-a10d-4f94-8097-667df133713d/manager/0.log" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.767689 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wpfh8_9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3/operator/0.log" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.894512 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65f8659594-f2bcj_a9c3ecf7-40b8-43a9-902d-0fe02be37037/manager/0.log" Dec 03 18:44:33 crc kubenswrapper[4687]: I1203 18:44:33.922215 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-gbkkg_6e6fc336-ee86-4c81-bbc7-76b241f4cffa/kube-rbac-proxy/0.log" Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.028742 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-gbkkg_6e6fc336-ee86-4c81-bbc7-76b241f4cffa/manager/0.log" Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.032263 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2r2b" Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.052211 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2r2b"] Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.062577 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q2r2b"] Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.070315 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-vxwfl_785c9182-9230-4d64-9a16-81877ee4d03e/kube-rbac-proxy/0.log" Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.128461 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-vxwfl_785c9182-9230-4d64-9a16-81877ee4d03e/manager/0.log" Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.211241 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-58bfx_f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf/kube-rbac-proxy/0.log" Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.299967 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-58bfx_f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf/manager/0.log" Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.307419 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-xvq78_b119316e-0e6a-43d8-a5e3-0068f099fad0/kube-rbac-proxy/0.log" Dec 03 18:44:34 crc kubenswrapper[4687]: I1203 18:44:34.366902 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-xvq78_b119316e-0e6a-43d8-a5e3-0068f099fad0/manager/0.log" Dec 03 18:44:35 crc kubenswrapper[4687]: I1203 
18:44:35.416794 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" path="/var/lib/kubelet/pods/5a6976b2-c936-4246-bec9-d1967a7a85b5/volumes" Dec 03 18:44:51 crc kubenswrapper[4687]: I1203 18:44:51.671054 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xv2xd_e248449e-8a3d-418a-8f0f-0b8484d27c39/control-plane-machine-set-operator/0.log" Dec 03 18:44:51 crc kubenswrapper[4687]: I1203 18:44:51.885769 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q8fqs_bcfb21f2-e1fe-42f0-b166-a2f50847cc6b/kube-rbac-proxy/0.log" Dec 03 18:44:51 crc kubenswrapper[4687]: I1203 18:44:51.936396 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q8fqs_bcfb21f2-e1fe-42f0-b166-a2f50847cc6b/machine-api-operator/0.log" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.199435 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2"] Dec 03 18:45:00 crc kubenswrapper[4687]: E1203 18:45:00.201395 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerName="extract-utilities" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.201431 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerName="extract-utilities" Dec 03 18:45:00 crc kubenswrapper[4687]: E1203 18:45:00.201475 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerName="extract-content" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.201486 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerName="extract-content" Dec 03 18:45:00 crc 
kubenswrapper[4687]: E1203 18:45:00.201530 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerName="registry-server" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.201540 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerName="registry-server" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.202021 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6976b2-c936-4246-bec9-d1967a7a85b5" containerName="registry-server" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.203035 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.207338 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.208404 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.231298 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2"] Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.296744 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0b0aeca-ada9-4c31-b6a3-7f533a714729-config-volume\") pod \"collect-profiles-29413125-4kjx2\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.296858 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0b0aeca-ada9-4c31-b6a3-7f533a714729-secret-volume\") pod \"collect-profiles-29413125-4kjx2\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.296898 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598jm\" (UniqueName: \"kubernetes.io/projected/b0b0aeca-ada9-4c31-b6a3-7f533a714729-kube-api-access-598jm\") pod \"collect-profiles-29413125-4kjx2\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.399238 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0b0aeca-ada9-4c31-b6a3-7f533a714729-config-volume\") pod \"collect-profiles-29413125-4kjx2\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.399371 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0b0aeca-ada9-4c31-b6a3-7f533a714729-secret-volume\") pod \"collect-profiles-29413125-4kjx2\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.399417 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-598jm\" (UniqueName: \"kubernetes.io/projected/b0b0aeca-ada9-4c31-b6a3-7f533a714729-kube-api-access-598jm\") pod \"collect-profiles-29413125-4kjx2\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.400208 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0b0aeca-ada9-4c31-b6a3-7f533a714729-config-volume\") pod \"collect-profiles-29413125-4kjx2\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.778505 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0b0aeca-ada9-4c31-b6a3-7f533a714729-secret-volume\") pod \"collect-profiles-29413125-4kjx2\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.781085 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-598jm\" (UniqueName: \"kubernetes.io/projected/b0b0aeca-ada9-4c31-b6a3-7f533a714729-kube-api-access-598jm\") pod \"collect-profiles-29413125-4kjx2\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:00 crc kubenswrapper[4687]: I1203 18:45:00.833982 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:01 crc kubenswrapper[4687]: I1203 18:45:01.342408 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2"] Dec 03 18:45:02 crc kubenswrapper[4687]: I1203 18:45:02.271354 4687 generic.go:334] "Generic (PLEG): container finished" podID="b0b0aeca-ada9-4c31-b6a3-7f533a714729" containerID="503f5456659f774087df11dd7fb70b15d253568554ff5cbddd7a4fd1d7766b11" exitCode=0 Dec 03 18:45:02 crc kubenswrapper[4687]: I1203 18:45:02.271475 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" event={"ID":"b0b0aeca-ada9-4c31-b6a3-7f533a714729","Type":"ContainerDied","Data":"503f5456659f774087df11dd7fb70b15d253568554ff5cbddd7a4fd1d7766b11"} Dec 03 18:45:02 crc kubenswrapper[4687]: I1203 18:45:02.271666 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" event={"ID":"b0b0aeca-ada9-4c31-b6a3-7f533a714729","Type":"ContainerStarted","Data":"5ec9b84baaf26a84f5b772d94ce269f1e27e7b4bd4b380d1348cde880f18dcac"} Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.613889 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.769343 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-598jm\" (UniqueName: \"kubernetes.io/projected/b0b0aeca-ada9-4c31-b6a3-7f533a714729-kube-api-access-598jm\") pod \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.769434 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0b0aeca-ada9-4c31-b6a3-7f533a714729-secret-volume\") pod \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.769527 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0b0aeca-ada9-4c31-b6a3-7f533a714729-config-volume\") pod \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\" (UID: \"b0b0aeca-ada9-4c31-b6a3-7f533a714729\") " Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.770249 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0b0aeca-ada9-4c31-b6a3-7f533a714729-config-volume" (OuterVolumeSpecName: "config-volume") pod "b0b0aeca-ada9-4c31-b6a3-7f533a714729" (UID: "b0b0aeca-ada9-4c31-b6a3-7f533a714729"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.776216 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b0aeca-ada9-4c31-b6a3-7f533a714729-kube-api-access-598jm" (OuterVolumeSpecName: "kube-api-access-598jm") pod "b0b0aeca-ada9-4c31-b6a3-7f533a714729" (UID: "b0b0aeca-ada9-4c31-b6a3-7f533a714729"). 
InnerVolumeSpecName "kube-api-access-598jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.776234 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b0aeca-ada9-4c31-b6a3-7f533a714729-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b0b0aeca-ada9-4c31-b6a3-7f533a714729" (UID: "b0b0aeca-ada9-4c31-b6a3-7f533a714729"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.871761 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-598jm\" (UniqueName: \"kubernetes.io/projected/b0b0aeca-ada9-4c31-b6a3-7f533a714729-kube-api-access-598jm\") on node \"crc\" DevicePath \"\"" Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.872055 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b0b0aeca-ada9-4c31-b6a3-7f533a714729-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 03 18:45:03 crc kubenswrapper[4687]: I1203 18:45:03.872153 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0b0aeca-ada9-4c31-b6a3-7f533a714729-config-volume\") on node \"crc\" DevicePath \"\"" Dec 03 18:45:04 crc kubenswrapper[4687]: I1203 18:45:04.151102 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-f2zqs_46718bd5-eda0-473f-ba31-97f2a591fefe/cert-manager-controller/0.log" Dec 03 18:45:04 crc kubenswrapper[4687]: I1203 18:45:04.288861 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" event={"ID":"b0b0aeca-ada9-4c31-b6a3-7f533a714729","Type":"ContainerDied","Data":"5ec9b84baaf26a84f5b772d94ce269f1e27e7b4bd4b380d1348cde880f18dcac"} Dec 03 18:45:04 crc kubenswrapper[4687]: I1203 18:45:04.288910 4687 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ec9b84baaf26a84f5b772d94ce269f1e27e7b4bd4b380d1348cde880f18dcac" Dec 03 18:45:04 crc kubenswrapper[4687]: I1203 18:45:04.288932 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413125-4kjx2" Dec 03 18:45:04 crc kubenswrapper[4687]: I1203 18:45:04.329833 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-pxzt9_91a9f246-dffa-4891-a4b8-91962e0bdbad/cert-manager-cainjector/0.log" Dec 03 18:45:04 crc kubenswrapper[4687]: I1203 18:45:04.355355 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-894dz_ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43/cert-manager-webhook/0.log" Dec 03 18:45:04 crc kubenswrapper[4687]: I1203 18:45:04.705286 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55"] Dec 03 18:45:04 crc kubenswrapper[4687]: I1203 18:45:04.713359 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413080-kdc55"] Dec 03 18:45:05 crc kubenswrapper[4687]: I1203 18:45:05.420250 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0887fc-fb17-4743-bdf2-898815992dd9" path="/var/lib/kubelet/pods/ef0887fc-fb17-4743-bdf2-898815992dd9/volumes" Dec 03 18:45:16 crc kubenswrapper[4687]: I1203 18:45:16.436642 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-dkrpg_b1bd9d52-1f74-4001-a2ff-c3a84666c686/nmstate-console-plugin/0.log" Dec 03 18:45:16 crc kubenswrapper[4687]: I1203 18:45:16.607061 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p2m72_9623c042-2813-4192-a1fc-a92a58364fce/nmstate-handler/0.log" Dec 03 18:45:16 crc 
kubenswrapper[4687]: I1203 18:45:16.648007 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ncxtd_34214bad-1472-4611-9876-d7765279821c/kube-rbac-proxy/0.log" Dec 03 18:45:16 crc kubenswrapper[4687]: I1203 18:45:16.665545 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ncxtd_34214bad-1472-4611-9876-d7765279821c/nmstate-metrics/0.log" Dec 03 18:45:16 crc kubenswrapper[4687]: I1203 18:45:16.801352 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-l4nkx_d803fc3b-cbaa-4241-870a-7c89982621dd/nmstate-operator/0.log" Dec 03 18:45:16 crc kubenswrapper[4687]: I1203 18:45:16.868480 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-blkh6_b8bf00a4-e266-4c05-bfc5-4121c96f0368/nmstate-webhook/0.log" Dec 03 18:45:32 crc kubenswrapper[4687]: I1203 18:45:32.684437 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-xc95b_bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3/kube-rbac-proxy/0.log" Dec 03 18:45:32 crc kubenswrapper[4687]: I1203 18:45:32.857902 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-xc95b_bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3/controller/0.log" Dec 03 18:45:32 crc kubenswrapper[4687]: I1203 18:45:32.888553 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-frr-files/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.070963 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-reloader/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.098689 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-frr-files/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.106216 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-reloader/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.114339 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-metrics/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.291196 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-frr-files/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.353742 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-metrics/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.357654 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-reloader/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.376623 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-metrics/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.561609 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-reloader/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.586644 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/controller/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.589815 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-frr-files/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.603748 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-metrics/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.796173 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/kube-rbac-proxy/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.823435 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/frr-metrics/0.log" Dec 03 18:45:33 crc kubenswrapper[4687]: I1203 18:45:33.831703 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/kube-rbac-proxy-frr/0.log" Dec 03 18:45:34 crc kubenswrapper[4687]: I1203 18:45:34.010383 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/reloader/0.log" Dec 03 18:45:34 crc kubenswrapper[4687]: I1203 18:45:34.012155 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-z7q2l_ba8d9037-40bd-4f5b-bd59-139f36424600/frr-k8s-webhook-server/0.log" Dec 03 18:45:34 crc kubenswrapper[4687]: I1203 18:45:34.628138 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-758fc566f8-ssxcf_20b383be-ffda-4db5-8914-c3a22cfb94ec/manager/0.log" Dec 03 18:45:34 crc kubenswrapper[4687]: I1203 18:45:34.857345 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6cfb994ff-8gwcx_3c6529b3-3b9c-4329-8ed7-05431ec4a4bf/webhook-server/0.log" Dec 03 18:45:35 crc kubenswrapper[4687]: I1203 18:45:35.024358 4687 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rzhqb_fe83569c-2e40-440d-85fc-764d28429dbf/kube-rbac-proxy/0.log" Dec 03 18:45:35 crc kubenswrapper[4687]: I1203 18:45:35.128787 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/frr/0.log" Dec 03 18:45:35 crc kubenswrapper[4687]: I1203 18:45:35.429707 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rzhqb_fe83569c-2e40-440d-85fc-764d28429dbf/speaker/0.log" Dec 03 18:45:41 crc kubenswrapper[4687]: I1203 18:45:41.967640 4687 scope.go:117] "RemoveContainer" containerID="764a517b4f033f44adf30ade85cf18221e64cffebe81eee6342cea2b41c49b5f" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.105968 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/util/0.log" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.259758 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/util/0.log" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.318601 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/pull/0.log" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.327093 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/pull/0.log" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.527079 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/pull/0.log" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.552979 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/util/0.log" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.597415 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/extract/0.log" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.757543 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/util/0.log" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.981350 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/pull/0.log" Dec 03 18:45:48 crc kubenswrapper[4687]: I1203 18:45:48.984026 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/util/0.log" Dec 03 18:45:49 crc kubenswrapper[4687]: I1203 18:45:49.037905 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/pull/0.log" Dec 03 18:45:49 crc kubenswrapper[4687]: I1203 18:45:49.163076 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/pull/0.log" Dec 03 
18:45:49 crc kubenswrapper[4687]: I1203 18:45:49.210510 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/util/0.log" Dec 03 18:45:49 crc kubenswrapper[4687]: I1203 18:45:49.227749 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/extract/0.log" Dec 03 18:45:49 crc kubenswrapper[4687]: I1203 18:45:49.418155 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-utilities/0.log" Dec 03 18:45:49 crc kubenswrapper[4687]: I1203 18:45:49.757320 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-content/0.log" Dec 03 18:45:49 crc kubenswrapper[4687]: I1203 18:45:49.759551 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-content/0.log" Dec 03 18:45:49 crc kubenswrapper[4687]: I1203 18:45:49.790979 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-utilities/0.log" Dec 03 18:45:49 crc kubenswrapper[4687]: I1203 18:45:49.955892 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-utilities/0.log" Dec 03 18:45:50 crc kubenswrapper[4687]: I1203 18:45:50.007656 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-content/0.log" Dec 03 18:45:50 crc kubenswrapper[4687]: I1203 18:45:50.194054 
4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-utilities/0.log" Dec 03 18:45:50 crc kubenswrapper[4687]: I1203 18:45:50.419167 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/registry-server/0.log" Dec 03 18:45:50 crc kubenswrapper[4687]: I1203 18:45:50.448720 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-utilities/0.log" Dec 03 18:45:50 crc kubenswrapper[4687]: I1203 18:45:50.490012 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-content/0.log" Dec 03 18:45:50 crc kubenswrapper[4687]: I1203 18:45:50.561372 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-content/0.log" Dec 03 18:45:50 crc kubenswrapper[4687]: I1203 18:45:50.637650 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-utilities/0.log" Dec 03 18:45:50 crc kubenswrapper[4687]: I1203 18:45:50.674180 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-content/0.log" Dec 03 18:45:50 crc kubenswrapper[4687]: I1203 18:45:50.897571 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6bp27_d7aa828b-8739-41ee-bdd4-81f7b5421561/marketplace-operator/0.log" Dec 03 18:45:51 crc kubenswrapper[4687]: I1203 18:45:51.085809 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-utilities/0.log" Dec 03 18:45:51 crc kubenswrapper[4687]: I1203 18:45:51.096462 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/registry-server/0.log" Dec 03 18:45:51 crc kubenswrapper[4687]: I1203 18:45:51.266498 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-utilities/0.log" Dec 03 18:45:51 crc kubenswrapper[4687]: I1203 18:45:51.295235 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-content/0.log" Dec 03 18:45:51 crc kubenswrapper[4687]: I1203 18:45:51.329210 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-content/0.log" Dec 03 18:45:51 crc kubenswrapper[4687]: I1203 18:45:51.475604 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-utilities/0.log" Dec 03 18:45:51 crc kubenswrapper[4687]: I1203 18:45:51.478314 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-content/0.log" Dec 03 18:45:51 crc kubenswrapper[4687]: I1203 18:45:51.610975 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/registry-server/0.log" Dec 03 18:45:52 crc kubenswrapper[4687]: I1203 18:45:52.509221 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-utilities/0.log" Dec 03 18:45:52 crc kubenswrapper[4687]: I1203 18:45:52.620758 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-utilities/0.log" Dec 03 18:45:52 crc kubenswrapper[4687]: I1203 18:45:52.648023 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-content/0.log" Dec 03 18:45:52 crc kubenswrapper[4687]: I1203 18:45:52.674688 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-content/0.log" Dec 03 18:45:52 crc kubenswrapper[4687]: I1203 18:45:52.794349 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-utilities/0.log" Dec 03 18:45:52 crc kubenswrapper[4687]: I1203 18:45:52.867346 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-content/0.log" Dec 03 18:45:52 crc kubenswrapper[4687]: I1203 18:45:52.991426 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/registry-server/0.log" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.009174 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6wr5q"] Dec 03 18:46:41 crc kubenswrapper[4687]: E1203 18:46:41.010110 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b0aeca-ada9-4c31-b6a3-7f533a714729" containerName="collect-profiles" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.010140 4687 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b0b0aeca-ada9-4c31-b6a3-7f533a714729" containerName="collect-profiles" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.010308 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b0aeca-ada9-4c31-b6a3-7f533a714729" containerName="collect-profiles" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.011737 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.023046 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6wr5q"] Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.117255 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-catalog-content\") pod \"community-operators-6wr5q\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.117329 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6zss\" (UniqueName: \"kubernetes.io/projected/a5567c1b-6fc9-4513-84e4-d606a90a853f-kube-api-access-c6zss\") pod \"community-operators-6wr5q\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.117466 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-utilities\") pod \"community-operators-6wr5q\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.219268 4687 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-catalog-content\") pod \"community-operators-6wr5q\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.219341 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6zss\" (UniqueName: \"kubernetes.io/projected/a5567c1b-6fc9-4513-84e4-d606a90a853f-kube-api-access-c6zss\") pod \"community-operators-6wr5q\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.219455 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-utilities\") pod \"community-operators-6wr5q\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.219989 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-catalog-content\") pod \"community-operators-6wr5q\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.220162 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-utilities\") pod \"community-operators-6wr5q\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.255399 4687 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c6zss\" (UniqueName: \"kubernetes.io/projected/a5567c1b-6fc9-4513-84e4-d606a90a853f-kube-api-access-c6zss\") pod \"community-operators-6wr5q\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.336825 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:41 crc kubenswrapper[4687]: I1203 18:46:41.869177 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6wr5q"] Dec 03 18:46:42 crc kubenswrapper[4687]: I1203 18:46:42.218029 4687 generic.go:334] "Generic (PLEG): container finished" podID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerID="95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe" exitCode=0 Dec 03 18:46:42 crc kubenswrapper[4687]: I1203 18:46:42.218132 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wr5q" event={"ID":"a5567c1b-6fc9-4513-84e4-d606a90a853f","Type":"ContainerDied","Data":"95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe"} Dec 03 18:46:42 crc kubenswrapper[4687]: I1203 18:46:42.218351 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wr5q" event={"ID":"a5567c1b-6fc9-4513-84e4-d606a90a853f","Type":"ContainerStarted","Data":"e47f782c05e10583e037d06b1bd8855f334a4c0d094d5a91b7297e2762c81eaf"} Dec 03 18:46:44 crc kubenswrapper[4687]: I1203 18:46:44.112735 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:46:44 crc kubenswrapper[4687]: I1203 18:46:44.114682 4687 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:46:44 crc kubenswrapper[4687]: I1203 18:46:44.251817 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wr5q" event={"ID":"a5567c1b-6fc9-4513-84e4-d606a90a853f","Type":"ContainerStarted","Data":"469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba"} Dec 03 18:46:45 crc kubenswrapper[4687]: I1203 18:46:45.262803 4687 generic.go:334] "Generic (PLEG): container finished" podID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerID="469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba" exitCode=0 Dec 03 18:46:45 crc kubenswrapper[4687]: I1203 18:46:45.262856 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wr5q" event={"ID":"a5567c1b-6fc9-4513-84e4-d606a90a853f","Type":"ContainerDied","Data":"469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba"} Dec 03 18:46:45 crc kubenswrapper[4687]: I1203 18:46:45.263201 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wr5q" event={"ID":"a5567c1b-6fc9-4513-84e4-d606a90a853f","Type":"ContainerStarted","Data":"e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446"} Dec 03 18:46:45 crc kubenswrapper[4687]: I1203 18:46:45.283812 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6wr5q" podStartSLOduration=2.876671333 podStartE2EDuration="5.283794853s" podCreationTimestamp="2025-12-03 18:46:40 +0000 UTC" firstStartedPulling="2025-12-03 18:46:42.219803438 +0000 UTC m=+4035.110498871" lastFinishedPulling="2025-12-03 18:46:44.626926968 +0000 
UTC m=+4037.517622391" observedRunningTime="2025-12-03 18:46:45.282873699 +0000 UTC m=+4038.173569132" watchObservedRunningTime="2025-12-03 18:46:45.283794853 +0000 UTC m=+4038.174490286" Dec 03 18:46:51 crc kubenswrapper[4687]: I1203 18:46:51.337713 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:51 crc kubenswrapper[4687]: I1203 18:46:51.338322 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:51 crc kubenswrapper[4687]: I1203 18:46:51.389193 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:52 crc kubenswrapper[4687]: I1203 18:46:52.382148 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:52 crc kubenswrapper[4687]: I1203 18:46:52.440134 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6wr5q"] Dec 03 18:46:54 crc kubenswrapper[4687]: I1203 18:46:54.348924 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6wr5q" podUID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerName="registry-server" containerID="cri-o://e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446" gracePeriod=2 Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.335529 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.371595 4687 generic.go:334] "Generic (PLEG): container finished" podID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerID="e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446" exitCode=0 Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.371631 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wr5q" event={"ID":"a5567c1b-6fc9-4513-84e4-d606a90a853f","Type":"ContainerDied","Data":"e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446"} Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.371660 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wr5q" event={"ID":"a5567c1b-6fc9-4513-84e4-d606a90a853f","Type":"ContainerDied","Data":"e47f782c05e10583e037d06b1bd8855f334a4c0d094d5a91b7297e2762c81eaf"} Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.371677 4687 scope.go:117] "RemoveContainer" containerID="e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.371790 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6wr5q" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.400528 4687 scope.go:117] "RemoveContainer" containerID="469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.402076 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-catalog-content\") pod \"a5567c1b-6fc9-4513-84e4-d606a90a853f\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.402210 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6zss\" (UniqueName: \"kubernetes.io/projected/a5567c1b-6fc9-4513-84e4-d606a90a853f-kube-api-access-c6zss\") pod \"a5567c1b-6fc9-4513-84e4-d606a90a853f\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.402232 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-utilities\") pod \"a5567c1b-6fc9-4513-84e4-d606a90a853f\" (UID: \"a5567c1b-6fc9-4513-84e4-d606a90a853f\") " Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.402996 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-utilities" (OuterVolumeSpecName: "utilities") pod "a5567c1b-6fc9-4513-84e4-d606a90a853f" (UID: "a5567c1b-6fc9-4513-84e4-d606a90a853f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.412576 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5567c1b-6fc9-4513-84e4-d606a90a853f-kube-api-access-c6zss" (OuterVolumeSpecName: "kube-api-access-c6zss") pod "a5567c1b-6fc9-4513-84e4-d606a90a853f" (UID: "a5567c1b-6fc9-4513-84e4-d606a90a853f"). InnerVolumeSpecName "kube-api-access-c6zss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.424488 4687 scope.go:117] "RemoveContainer" containerID="95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.465673 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5567c1b-6fc9-4513-84e4-d606a90a853f" (UID: "a5567c1b-6fc9-4513-84e4-d606a90a853f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.506114 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.506425 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6zss\" (UniqueName: \"kubernetes.io/projected/a5567c1b-6fc9-4513-84e4-d606a90a853f-kube-api-access-c6zss\") on node \"crc\" DevicePath \"\"" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.506445 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5567c1b-6fc9-4513-84e4-d606a90a853f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.511550 4687 scope.go:117] "RemoveContainer" containerID="e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446" Dec 03 18:46:55 crc kubenswrapper[4687]: E1203 18:46:55.512232 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446\": container with ID starting with e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446 not found: ID does not exist" containerID="e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.512277 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446"} err="failed to get container status \"e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446\": rpc error: code = NotFound desc = could not find container \"e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446\": container with ID 
starting with e075a19af9d74e7caf3c6d021ae19a9f33cec88c066b639392efa11c56b9e446 not found: ID does not exist" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.512311 4687 scope.go:117] "RemoveContainer" containerID="469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba" Dec 03 18:46:55 crc kubenswrapper[4687]: E1203 18:46:55.512873 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba\": container with ID starting with 469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba not found: ID does not exist" containerID="469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.512905 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba"} err="failed to get container status \"469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba\": rpc error: code = NotFound desc = could not find container \"469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba\": container with ID starting with 469bbcee6992755ea80496fbc36c3b9cd25928a79de04b6475f1e6b6453f3fba not found: ID does not exist" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.512925 4687 scope.go:117] "RemoveContainer" containerID="95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe" Dec 03 18:46:55 crc kubenswrapper[4687]: E1203 18:46:55.513254 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe\": container with ID starting with 95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe not found: ID does not exist" containerID="95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe" Dec 03 
18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.513285 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe"} err="failed to get container status \"95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe\": rpc error: code = NotFound desc = could not find container \"95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe\": container with ID starting with 95a165431d32e388773b18567b7750657bce8be161b3254750bcfc867d27debe not found: ID does not exist" Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.720408 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6wr5q"] Dec 03 18:46:55 crc kubenswrapper[4687]: I1203 18:46:55.735333 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6wr5q"] Dec 03 18:46:57 crc kubenswrapper[4687]: I1203 18:46:57.424242 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5567c1b-6fc9-4513-84e4-d606a90a853f" path="/var/lib/kubelet/pods/a5567c1b-6fc9-4513-84e4-d606a90a853f/volumes" Dec 03 18:47:14 crc kubenswrapper[4687]: I1203 18:47:14.111909 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:47:14 crc kubenswrapper[4687]: I1203 18:47:14.112571 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.736815 4687 
generic.go:334] "Generic (PLEG): container finished" podID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" containerID="6d974b287b097543f59893f06145c8930f4d0eaa421a2f1f8dc4b8ec037135a6" exitCode=0 Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.736920 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" event={"ID":"2e73d5dc-2b2f-46c3-a78b-3387644a03c0","Type":"ContainerDied","Data":"6d974b287b097543f59893f06145c8930f4d0eaa421a2f1f8dc4b8ec037135a6"} Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.737957 4687 scope.go:117] "RemoveContainer" containerID="6d974b287b097543f59893f06145c8930f4d0eaa421a2f1f8dc4b8ec037135a6" Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.917744 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cpl46"] Dec 03 18:47:34 crc kubenswrapper[4687]: E1203 18:47:34.918349 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerName="extract-content" Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.918362 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerName="extract-content" Dec 03 18:47:34 crc kubenswrapper[4687]: E1203 18:47:34.918380 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerName="extract-utilities" Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.918389 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerName="extract-utilities" Dec 03 18:47:34 crc kubenswrapper[4687]: E1203 18:47:34.918405 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerName="registry-server" Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.918411 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerName="registry-server" Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.918593 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5567c1b-6fc9-4513-84e4-d606a90a853f" containerName="registry-server" Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.920002 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:34 crc kubenswrapper[4687]: I1203 18:47:34.962302 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpl46"] Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.106326 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-utilities\") pod \"certified-operators-cpl46\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.106400 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-kube-api-access-hwdm8\") pod \"certified-operators-cpl46\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.106482 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-catalog-content\") pod \"certified-operators-cpl46\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.208406 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-catalog-content\") pod \"certified-operators-cpl46\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.208550 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-utilities\") pod \"certified-operators-cpl46\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.208588 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-kube-api-access-hwdm8\") pod \"certified-operators-cpl46\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.209029 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-utilities\") pod \"certified-operators-cpl46\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.209036 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-catalog-content\") pod \"certified-operators-cpl46\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.232750 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-kube-api-access-hwdm8\") pod \"certified-operators-cpl46\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.242096 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.437377 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4zgs8_must-gather-9dcnt_2e73d5dc-2b2f-46c3-a78b-3387644a03c0/gather/0.log" Dec 03 18:47:35 crc kubenswrapper[4687]: I1203 18:47:35.835614 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpl46"] Dec 03 18:47:36 crc kubenswrapper[4687]: I1203 18:47:36.764345 4687 generic.go:334] "Generic (PLEG): container finished" podID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerID="b275bacc2474646e8fb59ac65d37bde0ab4f6919d6a1d0f0253d84cfd699d71f" exitCode=0 Dec 03 18:47:36 crc kubenswrapper[4687]: I1203 18:47:36.764464 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpl46" event={"ID":"6b570051-4b9f-4f57-b0cc-7ab6f6a00410","Type":"ContainerDied","Data":"b275bacc2474646e8fb59ac65d37bde0ab4f6919d6a1d0f0253d84cfd699d71f"} Dec 03 18:47:36 crc kubenswrapper[4687]: I1203 18:47:36.764615 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpl46" event={"ID":"6b570051-4b9f-4f57-b0cc-7ab6f6a00410","Type":"ContainerStarted","Data":"67e3247d6e90e057e8080425ca72d960bd76b74a670e9b77a06c6d64ae20d74c"} Dec 03 18:47:38 crc kubenswrapper[4687]: I1203 18:47:38.785513 4687 generic.go:334] "Generic (PLEG): container finished" podID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" 
containerID="cda2527fa37d9f0d3695111bc92d2806c4484383ed88c50349ec176c7f59f131" exitCode=0 Dec 03 18:47:38 crc kubenswrapper[4687]: I1203 18:47:38.785824 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpl46" event={"ID":"6b570051-4b9f-4f57-b0cc-7ab6f6a00410","Type":"ContainerDied","Data":"cda2527fa37d9f0d3695111bc92d2806c4484383ed88c50349ec176c7f59f131"} Dec 03 18:47:39 crc kubenswrapper[4687]: I1203 18:47:39.797623 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpl46" event={"ID":"6b570051-4b9f-4f57-b0cc-7ab6f6a00410","Type":"ContainerStarted","Data":"8042ddc010fa95617a53288e8e628cd27260d721e1963d6a1fed1eb1bfb839c3"} Dec 03 18:47:39 crc kubenswrapper[4687]: I1203 18:47:39.815479 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cpl46" podStartSLOduration=3.05914098 podStartE2EDuration="5.815460097s" podCreationTimestamp="2025-12-03 18:47:34 +0000 UTC" firstStartedPulling="2025-12-03 18:47:36.766966878 +0000 UTC m=+4089.657662301" lastFinishedPulling="2025-12-03 18:47:39.523285985 +0000 UTC m=+4092.413981418" observedRunningTime="2025-12-03 18:47:39.813754651 +0000 UTC m=+4092.704450074" watchObservedRunningTime="2025-12-03 18:47:39.815460097 +0000 UTC m=+4092.706155530" Dec 03 18:47:43 crc kubenswrapper[4687]: I1203 18:47:43.124822 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4zgs8/must-gather-9dcnt"] Dec 03 18:47:43 crc kubenswrapper[4687]: I1203 18:47:43.125519 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" podUID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" containerName="copy" containerID="cri-o://43afa43b247d32c9317057d3cdc4ec999a69f4488d81f3431b5c84218e215822" gracePeriod=2 Dec 03 18:47:43 crc kubenswrapper[4687]: I1203 18:47:43.133354 4687 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-must-gather-4zgs8/must-gather-9dcnt"] Dec 03 18:47:44 crc kubenswrapper[4687]: I1203 18:47:44.111931 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:47:44 crc kubenswrapper[4687]: I1203 18:47:44.112211 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:47:44 crc kubenswrapper[4687]: I1203 18:47:44.112262 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:47:44 crc kubenswrapper[4687]: I1203 18:47:44.113032 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d9cf49ef51a55348040cd2616b9e1c904faf09e72a731278a4ed38853d9ee99"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:47:44 crc kubenswrapper[4687]: I1203 18:47:44.113089 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://4d9cf49ef51a55348040cd2616b9e1c904faf09e72a731278a4ed38853d9ee99" gracePeriod=600 Dec 03 18:47:44 crc kubenswrapper[4687]: I1203 18:47:44.841580 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-4zgs8_must-gather-9dcnt_2e73d5dc-2b2f-46c3-a78b-3387644a03c0/copy/0.log" Dec 03 18:47:44 crc kubenswrapper[4687]: I1203 18:47:44.842086 4687 generic.go:334] "Generic (PLEG): container finished" podID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" containerID="43afa43b247d32c9317057d3cdc4ec999a69f4488d81f3431b5c84218e215822" exitCode=143 Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.242781 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.243225 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.540756 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.587088 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4zgs8_must-gather-9dcnt_2e73d5dc-2b2f-46c3-a78b-3387644a03c0/copy/0.log" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.591596 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.725183 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n4fv\" (UniqueName: \"kubernetes.io/projected/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-kube-api-access-9n4fv\") pod \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\" (UID: \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\") " Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.725246 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-must-gather-output\") pod \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\" (UID: \"2e73d5dc-2b2f-46c3-a78b-3387644a03c0\") " Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.762191 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-kube-api-access-9n4fv" (OuterVolumeSpecName: "kube-api-access-9n4fv") pod "2e73d5dc-2b2f-46c3-a78b-3387644a03c0" (UID: "2e73d5dc-2b2f-46c3-a78b-3387644a03c0"). InnerVolumeSpecName "kube-api-access-9n4fv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.829777 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n4fv\" (UniqueName: \"kubernetes.io/projected/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-kube-api-access-9n4fv\") on node \"crc\" DevicePath \"\"" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.879380 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4zgs8_must-gather-9dcnt_2e73d5dc-2b2f-46c3-a78b-3387644a03c0/copy/0.log" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.881363 4687 scope.go:117] "RemoveContainer" containerID="43afa43b247d32c9317057d3cdc4ec999a69f4488d81f3431b5c84218e215822" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.881438 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zgs8/must-gather-9dcnt" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.886097 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="4d9cf49ef51a55348040cd2616b9e1c904faf09e72a731278a4ed38853d9ee99" exitCode=0 Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.886146 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"4d9cf49ef51a55348040cd2616b9e1c904faf09e72a731278a4ed38853d9ee99"} Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.902079 4687 scope.go:117] "RemoveContainer" containerID="6d974b287b097543f59893f06145c8930f4d0eaa421a2f1f8dc4b8ec037135a6" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.927805 4687 scope.go:117] "RemoveContainer" containerID="d856183790e2889b9ffbc293e15fa38dbff83b38c080a244ffaaddc637d603c8" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.939830 4687 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.941615 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2e73d5dc-2b2f-46c3-a78b-3387644a03c0" (UID: "2e73d5dc-2b2f-46c3-a78b-3387644a03c0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:47:45 crc kubenswrapper[4687]: I1203 18:47:45.997734 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpl46"] Dec 03 18:47:46 crc kubenswrapper[4687]: I1203 18:47:46.033018 4687 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2e73d5dc-2b2f-46c3-a78b-3387644a03c0-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 18:47:46 crc kubenswrapper[4687]: I1203 18:47:46.898266 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"} Dec 03 18:47:47 crc kubenswrapper[4687]: I1203 18:47:47.419774 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" path="/var/lib/kubelet/pods/2e73d5dc-2b2f-46c3-a78b-3387644a03c0/volumes" Dec 03 18:47:47 crc kubenswrapper[4687]: I1203 18:47:47.906344 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cpl46" podUID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerName="registry-server" containerID="cri-o://8042ddc010fa95617a53288e8e628cd27260d721e1963d6a1fed1eb1bfb839c3" gracePeriod=2 Dec 03 18:47:48 crc kubenswrapper[4687]: I1203 
18:47:48.915561 4687 generic.go:334] "Generic (PLEG): container finished" podID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerID="8042ddc010fa95617a53288e8e628cd27260d721e1963d6a1fed1eb1bfb839c3" exitCode=0 Dec 03 18:47:48 crc kubenswrapper[4687]: I1203 18:47:48.915624 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpl46" event={"ID":"6b570051-4b9f-4f57-b0cc-7ab6f6a00410","Type":"ContainerDied","Data":"8042ddc010fa95617a53288e8e628cd27260d721e1963d6a1fed1eb1bfb839c3"} Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.394409 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.495380 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-kube-api-access-hwdm8\") pod \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.495461 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-utilities\") pod \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.495528 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-catalog-content\") pod \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\" (UID: \"6b570051-4b9f-4f57-b0cc-7ab6f6a00410\") " Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.496583 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-utilities" (OuterVolumeSpecName: "utilities") pod "6b570051-4b9f-4f57-b0cc-7ab6f6a00410" (UID: "6b570051-4b9f-4f57-b0cc-7ab6f6a00410"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.505486 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-kube-api-access-hwdm8" (OuterVolumeSpecName: "kube-api-access-hwdm8") pod "6b570051-4b9f-4f57-b0cc-7ab6f6a00410" (UID: "6b570051-4b9f-4f57-b0cc-7ab6f6a00410"). InnerVolumeSpecName "kube-api-access-hwdm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.552932 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b570051-4b9f-4f57-b0cc-7ab6f6a00410" (UID: "6b570051-4b9f-4f57-b0cc-7ab6f6a00410"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.597050 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwdm8\" (UniqueName: \"kubernetes.io/projected/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-kube-api-access-hwdm8\") on node \"crc\" DevicePath \"\"" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.597075 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.597085 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b570051-4b9f-4f57-b0cc-7ab6f6a00410-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.929773 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpl46" event={"ID":"6b570051-4b9f-4f57-b0cc-7ab6f6a00410","Type":"ContainerDied","Data":"67e3247d6e90e057e8080425ca72d960bd76b74a670e9b77a06c6d64ae20d74c"} Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.929851 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cpl46" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.930070 4687 scope.go:117] "RemoveContainer" containerID="8042ddc010fa95617a53288e8e628cd27260d721e1963d6a1fed1eb1bfb839c3" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.949537 4687 scope.go:117] "RemoveContainer" containerID="cda2527fa37d9f0d3695111bc92d2806c4484383ed88c50349ec176c7f59f131" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.984250 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpl46"] Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.989790 4687 scope.go:117] "RemoveContainer" containerID="b275bacc2474646e8fb59ac65d37bde0ab4f6919d6a1d0f0253d84cfd699d71f" Dec 03 18:47:49 crc kubenswrapper[4687]: I1203 18:47:49.993413 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cpl46"] Dec 03 18:47:51 crc kubenswrapper[4687]: I1203 18:47:51.419686 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" path="/var/lib/kubelet/pods/6b570051-4b9f-4f57-b0cc-7ab6f6a00410/volumes" Dec 03 18:50:14 crc kubenswrapper[4687]: I1203 18:50:14.111658 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:50:14 crc kubenswrapper[4687]: I1203 18:50:14.112214 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:50:25 crc kubenswrapper[4687]: 
I1203 18:50:25.063610 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2vffz"] Dec 03 18:50:25 crc kubenswrapper[4687]: E1203 18:50:25.064542 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerName="extract-content" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.064559 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerName="extract-content" Dec 03 18:50:25 crc kubenswrapper[4687]: E1203 18:50:25.064571 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" containerName="gather" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.064576 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" containerName="gather" Dec 03 18:50:25 crc kubenswrapper[4687]: E1203 18:50:25.064592 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerName="extract-utilities" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.064599 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerName="extract-utilities" Dec 03 18:50:25 crc kubenswrapper[4687]: E1203 18:50:25.064616 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerName="registry-server" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.064622 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerName="registry-server" Dec 03 18:50:25 crc kubenswrapper[4687]: E1203 18:50:25.064634 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" containerName="copy" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.064640 4687 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" containerName="copy" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.064831 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" containerName="copy" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.064876 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b570051-4b9f-4f57-b0cc-7ab6f6a00410" containerName="registry-server" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.064895 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e73d5dc-2b2f-46c3-a78b-3387644a03c0" containerName="gather" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.066393 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.088791 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vffz"] Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.126681 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzlxf\" (UniqueName: \"kubernetes.io/projected/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-kube-api-access-vzlxf\") pod \"redhat-marketplace-2vffz\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.126745 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-catalog-content\") pod \"redhat-marketplace-2vffz\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.126947 4687 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-utilities\") pod \"redhat-marketplace-2vffz\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.229258 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-utilities\") pod \"redhat-marketplace-2vffz\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.229382 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzlxf\" (UniqueName: \"kubernetes.io/projected/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-kube-api-access-vzlxf\") pod \"redhat-marketplace-2vffz\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.229408 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-catalog-content\") pod \"redhat-marketplace-2vffz\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.229720 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-utilities\") pod \"redhat-marketplace-2vffz\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.229790 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-catalog-content\") pod \"redhat-marketplace-2vffz\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.281028 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzlxf\" (UniqueName: \"kubernetes.io/projected/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-kube-api-access-vzlxf\") pod \"redhat-marketplace-2vffz\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.413017 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:25 crc kubenswrapper[4687]: I1203 18:50:25.883729 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vffz"] Dec 03 18:50:26 crc kubenswrapper[4687]: I1203 18:50:26.444492 4687 generic.go:334] "Generic (PLEG): container finished" podID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerID="5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d" exitCode=0 Dec 03 18:50:26 crc kubenswrapper[4687]: I1203 18:50:26.444590 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vffz" event={"ID":"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f","Type":"ContainerDied","Data":"5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d"} Dec 03 18:50:26 crc kubenswrapper[4687]: I1203 18:50:26.444843 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vffz" event={"ID":"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f","Type":"ContainerStarted","Data":"1b8f2a7dd606956ded305e43145e9a85c24f5d6c8f2a5012cc31b33ec5fe3077"} Dec 03 18:50:26 crc kubenswrapper[4687]: I1203 18:50:26.446953 4687 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:50:28 crc kubenswrapper[4687]: I1203 18:50:28.465080 4687 generic.go:334] "Generic (PLEG): container finished" podID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerID="94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f" exitCode=0 Dec 03 18:50:28 crc kubenswrapper[4687]: I1203 18:50:28.465190 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vffz" event={"ID":"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f","Type":"ContainerDied","Data":"94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f"} Dec 03 18:50:30 crc kubenswrapper[4687]: I1203 18:50:30.485519 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vffz" event={"ID":"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f","Type":"ContainerStarted","Data":"46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575"} Dec 03 18:50:30 crc kubenswrapper[4687]: I1203 18:50:30.509947 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2vffz" podStartSLOduration=2.478870757 podStartE2EDuration="5.509924199s" podCreationTimestamp="2025-12-03 18:50:25 +0000 UTC" firstStartedPulling="2025-12-03 18:50:26.446590541 +0000 UTC m=+4259.337285974" lastFinishedPulling="2025-12-03 18:50:29.477643983 +0000 UTC m=+4262.368339416" observedRunningTime="2025-12-03 18:50:30.501673608 +0000 UTC m=+4263.392369041" watchObservedRunningTime="2025-12-03 18:50:30.509924199 +0000 UTC m=+4263.400619632" Dec 03 18:50:35 crc kubenswrapper[4687]: I1203 18:50:35.426938 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:35 crc kubenswrapper[4687]: I1203 18:50:35.427536 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:35 crc kubenswrapper[4687]: I1203 18:50:35.501597 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:35 crc kubenswrapper[4687]: I1203 18:50:35.606349 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:35 crc kubenswrapper[4687]: I1203 18:50:35.736075 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vffz"] Dec 03 18:50:37 crc kubenswrapper[4687]: I1203 18:50:37.557755 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2vffz" podUID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerName="registry-server" containerID="cri-o://46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575" gracePeriod=2 Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.094087 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.204737 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzlxf\" (UniqueName: \"kubernetes.io/projected/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-kube-api-access-vzlxf\") pod \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.204920 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-catalog-content\") pod \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.205006 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-utilities\") pod \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\" (UID: \"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f\") " Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.206339 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-utilities" (OuterVolumeSpecName: "utilities") pod "2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" (UID: "2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.209867 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-kube-api-access-vzlxf" (OuterVolumeSpecName: "kube-api-access-vzlxf") pod "2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" (UID: "2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f"). InnerVolumeSpecName "kube-api-access-vzlxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.225984 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" (UID: "2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.307724 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.307787 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzlxf\" (UniqueName: \"kubernetes.io/projected/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-kube-api-access-vzlxf\") on node \"crc\" DevicePath \"\"" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.307807 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.567925 4687 generic.go:334] "Generic (PLEG): container finished" podID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerID="46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575" exitCode=0 Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.567968 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vffz" event={"ID":"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f","Type":"ContainerDied","Data":"46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575"} Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.567995 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-2vffz" event={"ID":"2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f","Type":"ContainerDied","Data":"1b8f2a7dd606956ded305e43145e9a85c24f5d6c8f2a5012cc31b33ec5fe3077"} Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.568011 4687 scope.go:117] "RemoveContainer" containerID="46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.568014 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vffz" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.593301 4687 scope.go:117] "RemoveContainer" containerID="94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f" Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.627139 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vffz"] Dec 03 18:50:38 crc kubenswrapper[4687]: I1203 18:50:38.636437 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vffz"] Dec 03 18:50:39 crc kubenswrapper[4687]: I1203 18:50:39.007342 4687 scope.go:117] "RemoveContainer" containerID="5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d" Dec 03 18:50:39 crc kubenswrapper[4687]: I1203 18:50:39.042296 4687 scope.go:117] "RemoveContainer" containerID="46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575" Dec 03 18:50:39 crc kubenswrapper[4687]: E1203 18:50:39.043219 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575\": container with ID starting with 46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575 not found: ID does not exist" containerID="46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575" Dec 03 18:50:39 crc kubenswrapper[4687]: I1203 18:50:39.043246 4687 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575"} err="failed to get container status \"46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575\": rpc error: code = NotFound desc = could not find container \"46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575\": container with ID starting with 46184facb08cf31870b3ac777df55cb417bd7a0b040f63c774cdde708fa4c575 not found: ID does not exist" Dec 03 18:50:39 crc kubenswrapper[4687]: I1203 18:50:39.043265 4687 scope.go:117] "RemoveContainer" containerID="94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f" Dec 03 18:50:39 crc kubenswrapper[4687]: E1203 18:50:39.043475 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f\": container with ID starting with 94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f not found: ID does not exist" containerID="94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f" Dec 03 18:50:39 crc kubenswrapper[4687]: I1203 18:50:39.043505 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f"} err="failed to get container status \"94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f\": rpc error: code = NotFound desc = could not find container \"94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f\": container with ID starting with 94e9aa3c5aca8bf3c8b3655d7d6193c61d28812714602a517b330e86b6dda41f not found: ID does not exist" Dec 03 18:50:39 crc kubenswrapper[4687]: I1203 18:50:39.043526 4687 scope.go:117] "RemoveContainer" containerID="5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d" Dec 03 18:50:39 crc kubenswrapper[4687]: E1203 
18:50:39.043763 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d\": container with ID starting with 5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d not found: ID does not exist" containerID="5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d" Dec 03 18:50:39 crc kubenswrapper[4687]: I1203 18:50:39.043792 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d"} err="failed to get container status \"5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d\": rpc error: code = NotFound desc = could not find container \"5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d\": container with ID starting with 5e1e12da6f97dd876b24f7a6f6d79060a58615398895b4f5a3d677612eba777d not found: ID does not exist" Dec 03 18:50:39 crc kubenswrapper[4687]: I1203 18:50:39.418954 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" path="/var/lib/kubelet/pods/2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f/volumes" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.274249 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tfplk/must-gather-fzjdz"] Dec 03 18:50:40 crc kubenswrapper[4687]: E1203 18:50:40.275007 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerName="registry-server" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.275023 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerName="registry-server" Dec 03 18:50:40 crc kubenswrapper[4687]: E1203 18:50:40.275067 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" 
containerName="extract-content" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.275077 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerName="extract-content" Dec 03 18:50:40 crc kubenswrapper[4687]: E1203 18:50:40.275086 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerName="extract-utilities" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.275096 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerName="extract-utilities" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.275324 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4be59f-6e4d-41e2-a49d-3e7137bbcf8f" containerName="registry-server" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.276557 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.280664 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tfplk"/"openshift-service-ca.crt" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.281223 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tfplk"/"kube-root-ca.crt" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.296604 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tfplk/must-gather-fzjdz"] Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.344788 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-must-gather-output\") pod \"must-gather-fzjdz\" (UID: \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\") " pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:50:40 crc 
kubenswrapper[4687]: I1203 18:50:40.344886 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgw88\" (UniqueName: \"kubernetes.io/projected/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-kube-api-access-fgw88\") pod \"must-gather-fzjdz\" (UID: \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\") " pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.446702 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgw88\" (UniqueName: \"kubernetes.io/projected/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-kube-api-access-fgw88\") pod \"must-gather-fzjdz\" (UID: \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\") " pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.446839 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-must-gather-output\") pod \"must-gather-fzjdz\" (UID: \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\") " pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.447248 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-must-gather-output\") pod \"must-gather-fzjdz\" (UID: \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\") " pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:50:40 crc kubenswrapper[4687]: I1203 18:50:40.481796 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgw88\" (UniqueName: \"kubernetes.io/projected/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-kube-api-access-fgw88\") pod \"must-gather-fzjdz\" (UID: \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\") " pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:50:40 crc 
kubenswrapper[4687]: I1203 18:50:40.594200 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:50:41 crc kubenswrapper[4687]: I1203 18:50:41.128987 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tfplk/must-gather-fzjdz"] Dec 03 18:50:41 crc kubenswrapper[4687]: I1203 18:50:41.616480 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/must-gather-fzjdz" event={"ID":"e72a3ed1-15bc-4362-a4fb-bd912e4d619d","Type":"ContainerStarted","Data":"98971cb93c420c8d27a4aba8edf4cbcbf7173b7699bde2e0f2f76193f5fbc2d8"} Dec 03 18:50:42 crc kubenswrapper[4687]: I1203 18:50:42.626368 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/must-gather-fzjdz" event={"ID":"e72a3ed1-15bc-4362-a4fb-bd912e4d619d","Type":"ContainerStarted","Data":"7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e"} Dec 03 18:50:42 crc kubenswrapper[4687]: I1203 18:50:42.626710 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/must-gather-fzjdz" event={"ID":"e72a3ed1-15bc-4362-a4fb-bd912e4d619d","Type":"ContainerStarted","Data":"de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b"} Dec 03 18:50:43 crc kubenswrapper[4687]: I1203 18:50:43.654665 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tfplk/must-gather-fzjdz" podStartSLOduration=3.654636921 podStartE2EDuration="3.654636921s" podCreationTimestamp="2025-12-03 18:50:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:50:43.651424194 +0000 UTC m=+4276.542119627" watchObservedRunningTime="2025-12-03 18:50:43.654636921 +0000 UTC m=+4276.545332374" Dec 03 18:50:44 crc kubenswrapper[4687]: I1203 18:50:44.111191 4687 patch_prober.go:28] interesting 
pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:50:44 crc kubenswrapper[4687]: I1203 18:50:44.111472 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:50:45 crc kubenswrapper[4687]: I1203 18:50:45.384905 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tfplk/crc-debug-l4zxf"] Dec 03 18:50:45 crc kubenswrapper[4687]: I1203 18:50:45.386756 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:50:45 crc kubenswrapper[4687]: I1203 18:50:45.388700 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tfplk"/"default-dockercfg-d7729" Dec 03 18:50:45 crc kubenswrapper[4687]: I1203 18:50:45.544857 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-host\") pod \"crc-debug-l4zxf\" (UID: \"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\") " pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:50:45 crc kubenswrapper[4687]: I1203 18:50:45.545016 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6vp\" (UniqueName: \"kubernetes.io/projected/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-kube-api-access-dj6vp\") pod \"crc-debug-l4zxf\" (UID: \"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\") " pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:50:45 
crc kubenswrapper[4687]: I1203 18:50:45.646631 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-host\") pod \"crc-debug-l4zxf\" (UID: \"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\") " pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:50:45 crc kubenswrapper[4687]: I1203 18:50:45.646750 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj6vp\" (UniqueName: \"kubernetes.io/projected/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-kube-api-access-dj6vp\") pod \"crc-debug-l4zxf\" (UID: \"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\") " pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:50:45 crc kubenswrapper[4687]: I1203 18:50:45.646797 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-host\") pod \"crc-debug-l4zxf\" (UID: \"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\") " pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:50:45 crc kubenswrapper[4687]: I1203 18:50:45.665730 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj6vp\" (UniqueName: \"kubernetes.io/projected/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-kube-api-access-dj6vp\") pod \"crc-debug-l4zxf\" (UID: \"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\") " pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:50:45 crc kubenswrapper[4687]: I1203 18:50:45.704447 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:50:45 crc kubenswrapper[4687]: W1203 18:50:45.761386 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda52ca102_5a3a_42cc_8b5a_4298d3dae88a.slice/crio-f07627b5bce5d03ef392ddd16ed17b3e369aa6f39c45477f1d7b52b2ca61408f WatchSource:0}: Error finding container f07627b5bce5d03ef392ddd16ed17b3e369aa6f39c45477f1d7b52b2ca61408f: Status 404 returned error can't find the container with id f07627b5bce5d03ef392ddd16ed17b3e369aa6f39c45477f1d7b52b2ca61408f Dec 03 18:50:46 crc kubenswrapper[4687]: I1203 18:50:46.664474 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/crc-debug-l4zxf" event={"ID":"a52ca102-5a3a-42cc-8b5a-4298d3dae88a","Type":"ContainerStarted","Data":"af84cc96d2cf9df998083c47a3f4753c86f57e4b579a966e880e300afa7cd878"} Dec 03 18:50:46 crc kubenswrapper[4687]: I1203 18:50:46.665159 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/crc-debug-l4zxf" event={"ID":"a52ca102-5a3a-42cc-8b5a-4298d3dae88a","Type":"ContainerStarted","Data":"f07627b5bce5d03ef392ddd16ed17b3e369aa6f39c45477f1d7b52b2ca61408f"} Dec 03 18:50:46 crc kubenswrapper[4687]: I1203 18:50:46.700317 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tfplk/crc-debug-l4zxf" podStartSLOduration=1.700296644 podStartE2EDuration="1.700296644s" podCreationTimestamp="2025-12-03 18:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 18:50:46.684994283 +0000 UTC m=+4279.575689716" watchObservedRunningTime="2025-12-03 18:50:46.700296644 +0000 UTC m=+4279.590992077" Dec 03 18:51:14 crc kubenswrapper[4687]: I1203 18:51:14.111272 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:51:14 crc kubenswrapper[4687]: I1203 18:51:14.111766 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:51:14 crc kubenswrapper[4687]: I1203 18:51:14.111804 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:51:14 crc kubenswrapper[4687]: I1203 18:51:14.112519 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:51:14 crc kubenswrapper[4687]: I1203 18:51:14.112744 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" gracePeriod=600 Dec 03 18:51:14 crc kubenswrapper[4687]: I1203 18:51:14.910786 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" exitCode=0 Dec 03 18:51:14 crc kubenswrapper[4687]: I1203 18:51:14.911196 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"} Dec 03 18:51:14 crc kubenswrapper[4687]: I1203 18:51:14.911238 4687 scope.go:117] "RemoveContainer" containerID="4d9cf49ef51a55348040cd2616b9e1c904faf09e72a731278a4ed38853d9ee99" Dec 03 18:51:15 crc kubenswrapper[4687]: E1203 18:51:15.060633 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:51:15 crc kubenswrapper[4687]: I1203 18:51:15.922468 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:51:15 crc kubenswrapper[4687]: E1203 18:51:15.922724 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:51:26 crc kubenswrapper[4687]: I1203 18:51:26.408006 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:51:26 crc kubenswrapper[4687]: E1203 18:51:26.408769 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:51:30 crc kubenswrapper[4687]: I1203 18:51:30.048214 4687 generic.go:334] "Generic (PLEG): container finished" podID="a52ca102-5a3a-42cc-8b5a-4298d3dae88a" containerID="af84cc96d2cf9df998083c47a3f4753c86f57e4b579a966e880e300afa7cd878" exitCode=0 Dec 03 18:51:30 crc kubenswrapper[4687]: I1203 18:51:30.048297 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/crc-debug-l4zxf" event={"ID":"a52ca102-5a3a-42cc-8b5a-4298d3dae88a","Type":"ContainerDied","Data":"af84cc96d2cf9df998083c47a3f4753c86f57e4b579a966e880e300afa7cd878"} Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.173636 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.236148 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tfplk/crc-debug-l4zxf"] Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.247297 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tfplk/crc-debug-l4zxf"] Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.274081 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj6vp\" (UniqueName: \"kubernetes.io/projected/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-kube-api-access-dj6vp\") pod \"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\" (UID: \"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\") " Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.274495 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-host\") pod 
\"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\" (UID: \"a52ca102-5a3a-42cc-8b5a-4298d3dae88a\") " Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.274634 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-host" (OuterVolumeSpecName: "host") pod "a52ca102-5a3a-42cc-8b5a-4298d3dae88a" (UID: "a52ca102-5a3a-42cc-8b5a-4298d3dae88a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.275328 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-host\") on node \"crc\" DevicePath \"\"" Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.279267 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-kube-api-access-dj6vp" (OuterVolumeSpecName: "kube-api-access-dj6vp") pod "a52ca102-5a3a-42cc-8b5a-4298d3dae88a" (UID: "a52ca102-5a3a-42cc-8b5a-4298d3dae88a"). InnerVolumeSpecName "kube-api-access-dj6vp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.377450 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj6vp\" (UniqueName: \"kubernetes.io/projected/a52ca102-5a3a-42cc-8b5a-4298d3dae88a-kube-api-access-dj6vp\") on node \"crc\" DevicePath \"\"" Dec 03 18:51:31 crc kubenswrapper[4687]: I1203 18:51:31.417845 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52ca102-5a3a-42cc-8b5a-4298d3dae88a" path="/var/lib/kubelet/pods/a52ca102-5a3a-42cc-8b5a-4298d3dae88a/volumes" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.072828 4687 scope.go:117] "RemoveContainer" containerID="af84cc96d2cf9df998083c47a3f4753c86f57e4b579a966e880e300afa7cd878" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.072850 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-l4zxf" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.391055 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tfplk/crc-debug-9s7tg"] Dec 03 18:51:32 crc kubenswrapper[4687]: E1203 18:51:32.391781 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52ca102-5a3a-42cc-8b5a-4298d3dae88a" containerName="container-00" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.391795 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52ca102-5a3a-42cc-8b5a-4298d3dae88a" containerName="container-00" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.392007 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52ca102-5a3a-42cc-8b5a-4298d3dae88a" containerName="container-00" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.392652 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.395781 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tfplk"/"default-dockercfg-d7729" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.497849 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrpn\" (UniqueName: \"kubernetes.io/projected/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-kube-api-access-9nrpn\") pod \"crc-debug-9s7tg\" (UID: \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\") " pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.497937 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-host\") pod \"crc-debug-9s7tg\" (UID: \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\") " pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.599300 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrpn\" (UniqueName: \"kubernetes.io/projected/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-kube-api-access-9nrpn\") pod \"crc-debug-9s7tg\" (UID: \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\") " pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.599671 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-host\") pod \"crc-debug-9s7tg\" (UID: \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\") " pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.599918 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-host\") pod \"crc-debug-9s7tg\" (UID: \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\") " pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.618963 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrpn\" (UniqueName: \"kubernetes.io/projected/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-kube-api-access-9nrpn\") pod \"crc-debug-9s7tg\" (UID: \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\") " pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:32 crc kubenswrapper[4687]: I1203 18:51:32.708563 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:33 crc kubenswrapper[4687]: I1203 18:51:33.085478 4687 generic.go:334] "Generic (PLEG): container finished" podID="ac5a9051-acff-4781-a7a6-bab70bfbb6d8" containerID="200fe29ec6b1383a7ccb58448262cd8f200fd8cef3c9d173bd3de2f4c4eb2896" exitCode=0 Dec 03 18:51:33 crc kubenswrapper[4687]: I1203 18:51:33.085809 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/crc-debug-9s7tg" event={"ID":"ac5a9051-acff-4781-a7a6-bab70bfbb6d8","Type":"ContainerDied","Data":"200fe29ec6b1383a7ccb58448262cd8f200fd8cef3c9d173bd3de2f4c4eb2896"} Dec 03 18:51:33 crc kubenswrapper[4687]: I1203 18:51:33.085860 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/crc-debug-9s7tg" event={"ID":"ac5a9051-acff-4781-a7a6-bab70bfbb6d8","Type":"ContainerStarted","Data":"0c285f8322de1067396cac3f10b8d37a52a38eaeeb1479c6b3e0e445ac8efa47"} Dec 03 18:51:33 crc kubenswrapper[4687]: I1203 18:51:33.563933 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tfplk/crc-debug-9s7tg"] Dec 03 18:51:33 crc kubenswrapper[4687]: I1203 18:51:33.571491 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-tfplk/crc-debug-9s7tg"] Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.209407 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.243977 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-host\") pod \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\" (UID: \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\") " Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.244079 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-host" (OuterVolumeSpecName: "host") pod "ac5a9051-acff-4781-a7a6-bab70bfbb6d8" (UID: "ac5a9051-acff-4781-a7a6-bab70bfbb6d8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.244140 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrpn\" (UniqueName: \"kubernetes.io/projected/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-kube-api-access-9nrpn\") pod \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\" (UID: \"ac5a9051-acff-4781-a7a6-bab70bfbb6d8\") " Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.244448 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-host\") on node \"crc\" DevicePath \"\"" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.250356 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-kube-api-access-9nrpn" (OuterVolumeSpecName: "kube-api-access-9nrpn") pod "ac5a9051-acff-4781-a7a6-bab70bfbb6d8" (UID: "ac5a9051-acff-4781-a7a6-bab70bfbb6d8"). 
InnerVolumeSpecName "kube-api-access-9nrpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.345593 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrpn\" (UniqueName: \"kubernetes.io/projected/ac5a9051-acff-4781-a7a6-bab70bfbb6d8-kube-api-access-9nrpn\") on node \"crc\" DevicePath \"\"" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.792778 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tfplk/crc-debug-2xjs5"] Dec 03 18:51:34 crc kubenswrapper[4687]: E1203 18:51:34.793565 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5a9051-acff-4781-a7a6-bab70bfbb6d8" containerName="container-00" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.793585 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5a9051-acff-4781-a7a6-bab70bfbb6d8" containerName="container-00" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.793834 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac5a9051-acff-4781-a7a6-bab70bfbb6d8" containerName="container-00" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.794563 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.853680 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fks7x\" (UniqueName: \"kubernetes.io/projected/e0e07d73-3c76-4811-ace7-42d1efbfe29b-kube-api-access-fks7x\") pod \"crc-debug-2xjs5\" (UID: \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\") " pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.853806 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e07d73-3c76-4811-ace7-42d1efbfe29b-host\") pod \"crc-debug-2xjs5\" (UID: \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\") " pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.955461 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fks7x\" (UniqueName: \"kubernetes.io/projected/e0e07d73-3c76-4811-ace7-42d1efbfe29b-kube-api-access-fks7x\") pod \"crc-debug-2xjs5\" (UID: \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\") " pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.955532 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e07d73-3c76-4811-ace7-42d1efbfe29b-host\") pod \"crc-debug-2xjs5\" (UID: \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\") " pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:34 crc kubenswrapper[4687]: I1203 18:51:34.955664 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e07d73-3c76-4811-ace7-42d1efbfe29b-host\") pod \"crc-debug-2xjs5\" (UID: \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\") " pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:34 crc 
kubenswrapper[4687]: I1203 18:51:34.973772 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fks7x\" (UniqueName: \"kubernetes.io/projected/e0e07d73-3c76-4811-ace7-42d1efbfe29b-kube-api-access-fks7x\") pod \"crc-debug-2xjs5\" (UID: \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\") " pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:35 crc kubenswrapper[4687]: I1203 18:51:35.105295 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c285f8322de1067396cac3f10b8d37a52a38eaeeb1479c6b3e0e445ac8efa47" Dec 03 18:51:35 crc kubenswrapper[4687]: I1203 18:51:35.105370 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-9s7tg" Dec 03 18:51:35 crc kubenswrapper[4687]: I1203 18:51:35.111167 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:35 crc kubenswrapper[4687]: W1203 18:51:35.137590 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0e07d73_3c76_4811_ace7_42d1efbfe29b.slice/crio-e787e5fa55c67e2e611099afe43aafe83d1432ee75ac3202c82e398a15082231 WatchSource:0}: Error finding container e787e5fa55c67e2e611099afe43aafe83d1432ee75ac3202c82e398a15082231: Status 404 returned error can't find the container with id e787e5fa55c67e2e611099afe43aafe83d1432ee75ac3202c82e398a15082231 Dec 03 18:51:35 crc kubenswrapper[4687]: I1203 18:51:35.419709 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5a9051-acff-4781-a7a6-bab70bfbb6d8" path="/var/lib/kubelet/pods/ac5a9051-acff-4781-a7a6-bab70bfbb6d8/volumes" Dec 03 18:51:36 crc kubenswrapper[4687]: I1203 18:51:36.120830 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/crc-debug-2xjs5" 
event={"ID":"e0e07d73-3c76-4811-ace7-42d1efbfe29b","Type":"ContainerStarted","Data":"e787e5fa55c67e2e611099afe43aafe83d1432ee75ac3202c82e398a15082231"} Dec 03 18:51:37 crc kubenswrapper[4687]: I1203 18:51:37.130632 4687 generic.go:334] "Generic (PLEG): container finished" podID="e0e07d73-3c76-4811-ace7-42d1efbfe29b" containerID="41893c5643b87c897e42bb4ca98169562e8dd0227100cc2bd81495c59ff7a5e7" exitCode=0 Dec 03 18:51:37 crc kubenswrapper[4687]: I1203 18:51:37.130688 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tfplk/crc-debug-2xjs5" event={"ID":"e0e07d73-3c76-4811-ace7-42d1efbfe29b","Type":"ContainerDied","Data":"41893c5643b87c897e42bb4ca98169562e8dd0227100cc2bd81495c59ff7a5e7"} Dec 03 18:51:37 crc kubenswrapper[4687]: I1203 18:51:37.185350 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tfplk/crc-debug-2xjs5"] Dec 03 18:51:37 crc kubenswrapper[4687]: I1203 18:51:37.194060 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tfplk/crc-debug-2xjs5"] Dec 03 18:51:38 crc kubenswrapper[4687]: I1203 18:51:38.407500 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:51:38 crc kubenswrapper[4687]: E1203 18:51:38.408014 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:51:38 crc kubenswrapper[4687]: I1203 18:51:38.710791 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:38 crc kubenswrapper[4687]: I1203 18:51:38.823788 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fks7x\" (UniqueName: \"kubernetes.io/projected/e0e07d73-3c76-4811-ace7-42d1efbfe29b-kube-api-access-fks7x\") pod \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\" (UID: \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\") " Dec 03 18:51:38 crc kubenswrapper[4687]: I1203 18:51:38.824223 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e07d73-3c76-4811-ace7-42d1efbfe29b-host\") pod \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\" (UID: \"e0e07d73-3c76-4811-ace7-42d1efbfe29b\") " Dec 03 18:51:38 crc kubenswrapper[4687]: I1203 18:51:38.826398 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e07d73-3c76-4811-ace7-42d1efbfe29b-host" (OuterVolumeSpecName: "host") pod "e0e07d73-3c76-4811-ace7-42d1efbfe29b" (UID: "e0e07d73-3c76-4811-ace7-42d1efbfe29b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 18:51:38 crc kubenswrapper[4687]: I1203 18:51:38.847365 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e07d73-3c76-4811-ace7-42d1efbfe29b-kube-api-access-fks7x" (OuterVolumeSpecName: "kube-api-access-fks7x") pod "e0e07d73-3c76-4811-ace7-42d1efbfe29b" (UID: "e0e07d73-3c76-4811-ace7-42d1efbfe29b"). InnerVolumeSpecName "kube-api-access-fks7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:51:38 crc kubenswrapper[4687]: I1203 18:51:38.927322 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fks7x\" (UniqueName: \"kubernetes.io/projected/e0e07d73-3c76-4811-ace7-42d1efbfe29b-kube-api-access-fks7x\") on node \"crc\" DevicePath \"\"" Dec 03 18:51:38 crc kubenswrapper[4687]: I1203 18:51:38.927357 4687 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0e07d73-3c76-4811-ace7-42d1efbfe29b-host\") on node \"crc\" DevicePath \"\"" Dec 03 18:51:39 crc kubenswrapper[4687]: I1203 18:51:39.148819 4687 scope.go:117] "RemoveContainer" containerID="41893c5643b87c897e42bb4ca98169562e8dd0227100cc2bd81495c59ff7a5e7" Dec 03 18:51:39 crc kubenswrapper[4687]: I1203 18:51:39.148860 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/crc-debug-2xjs5" Dec 03 18:51:39 crc kubenswrapper[4687]: I1203 18:51:39.418412 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e07d73-3c76-4811-ace7-42d1efbfe29b" path="/var/lib/kubelet/pods/e0e07d73-3c76-4811-ace7-42d1efbfe29b/volumes" Dec 03 18:51:51 crc kubenswrapper[4687]: I1203 18:51:51.407771 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:51:51 crc kubenswrapper[4687]: E1203 18:51:51.408724 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:51:54 crc kubenswrapper[4687]: I1203 18:51:54.339569 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-f84949b66-zfm22_c37e3f72-636e-4175-a805-8b2aa8f52eca/barbican-api/0.log" Dec 03 18:51:54 crc kubenswrapper[4687]: I1203 18:51:54.414319 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f84949b66-zfm22_c37e3f72-636e-4175-a805-8b2aa8f52eca/barbican-api-log/0.log" Dec 03 18:51:54 crc kubenswrapper[4687]: I1203 18:51:54.535686 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b984dc754-pn82p_68f5675a-1ac6-475a-b0ba-b83e975e838f/barbican-keystone-listener/0.log" Dec 03 18:51:54 crc kubenswrapper[4687]: I1203 18:51:54.619309 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7b984dc754-pn82p_68f5675a-1ac6-475a-b0ba-b83e975e838f/barbican-keystone-listener-log/0.log" Dec 03 18:51:54 crc kubenswrapper[4687]: I1203 18:51:54.697849 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dc58d75dc-vk2m4_6f8ac0e6-dadf-44e8-8e92-56c306da2a8e/barbican-worker/0.log" Dec 03 18:51:54 crc kubenswrapper[4687]: I1203 18:51:54.707689 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dc58d75dc-vk2m4_6f8ac0e6-dadf-44e8-8e92-56c306da2a8e/barbican-worker-log/0.log" Dec 03 18:51:54 crc kubenswrapper[4687]: I1203 18:51:54.834076 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-tg8nf_6dcace96-ba84-4176-9fa0-216e86ae113b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:51:54 crc kubenswrapper[4687]: I1203 18:51:54.931333 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bd9cfd0-6df9-424b-b267-98e0a180a758/ceilometer-central-agent/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.010080 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_8bd9cfd0-6df9-424b-b267-98e0a180a758/ceilometer-notification-agent/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.068729 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bd9cfd0-6df9-424b-b267-98e0a180a758/proxy-httpd/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.145788 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bd9cfd0-6df9-424b-b267-98e0a180a758/sg-core/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.275071 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e4766b79-a447-4290-bbe9-dc10a59ced40/cinder-api/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.286970 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e4766b79-a447-4290-bbe9-dc10a59ced40/cinder-api-log/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.388709 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_05480209-7592-4ddf-a2d9-f06d4dce2c75/cinder-scheduler/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.494766 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_05480209-7592-4ddf-a2d9-f06d4dce2c75/probe/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.631098 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hqjwn_bd21b7de-e79a-45b6-a3ea-9fb73f55fea8/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.708760 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-wtmdk_d79dbe03-ec71-4fc7-8237-b3094ecb81ca/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 
18:51:55.822049 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-dpnwg_23a0d543-20cc-4b95-9f11-12b55442b95e/init/0.log"
Dec 03 18:51:55 crc kubenswrapper[4687]: I1203 18:51:55.973334 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-dpnwg_23a0d543-20cc-4b95-9f11-12b55442b95e/init/0.log"
Dec 03 18:51:56 crc kubenswrapper[4687]: I1203 18:51:56.046457 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-dpnwg_23a0d543-20cc-4b95-9f11-12b55442b95e/dnsmasq-dns/0.log"
Dec 03 18:51:56 crc kubenswrapper[4687]: I1203 18:51:56.051288 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-88lsw_283f8d5d-eee3-4591-b0d2-65c3cc8fa78f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:51:56 crc kubenswrapper[4687]: I1203 18:51:56.241855 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a919d81a-089d-4146-a4ee-c2db16491d11/glance-httpd/0.log"
Dec 03 18:51:56 crc kubenswrapper[4687]: I1203 18:51:56.267052 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a919d81a-089d-4146-a4ee-c2db16491d11/glance-log/0.log"
Dec 03 18:51:56 crc kubenswrapper[4687]: I1203 18:51:56.429249 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c3524def-b150-4d8d-9315-b4435781cf34/glance-log/0.log"
Dec 03 18:51:56 crc kubenswrapper[4687]: I1203 18:51:56.437237 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c3524def-b150-4d8d-9315-b4435781cf34/glance-httpd/0.log"
Dec 03 18:51:56 crc kubenswrapper[4687]: I1203 18:51:56.590240 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6968cc7b7b-57qh6_b08dc684-ab9f-41db-a259-2d06b757f3cf/horizon/0.log"
Dec 03 18:51:56 crc kubenswrapper[4687]: I1203 18:51:56.734999 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mh7ls_ff93c8d7-1225-45d9-952c-f770d7ad7e33/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:51:57 crc kubenswrapper[4687]: I1203 18:51:57.008167 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6968cc7b7b-57qh6_b08dc684-ab9f-41db-a259-2d06b757f3cf/horizon-log/0.log"
Dec 03 18:51:57 crc kubenswrapper[4687]: I1203 18:51:57.014198 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-89dpx_a9c34c4b-6990-485c-91b7-c07c7191c398/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:51:57 crc kubenswrapper[4687]: I1203 18:51:57.171064 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29413081-r8x9h_b058710c-db65-4f53-b9b7-2e279672355a/keystone-cron/0.log"
Dec 03 18:51:57 crc kubenswrapper[4687]: I1203 18:51:57.218798 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7fc787b46b-k9z8g_42536d5c-2479-4f9f-a6ff-d3705bb42b8f/keystone-api/0.log"
Dec 03 18:51:57 crc kubenswrapper[4687]: I1203 18:51:57.352323 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_976b9b5d-29fa-48e5-a77a-f3f5a480ad94/kube-state-metrics/0.log"
Dec 03 18:51:57 crc kubenswrapper[4687]: I1203 18:51:57.436030 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-vbkfp_e3ca0b80-1626-411c-b15c-c66f1f18cf9e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:51:57 crc kubenswrapper[4687]: I1203 18:51:57.736557 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d44b68cb5-gzqxl_120144a6-19ba-4119-9ef7-7c70664c5e0c/neutron-httpd/0.log"
Dec 03 18:51:57 crc kubenswrapper[4687]: I1203 18:51:57.753697 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7d44b68cb5-gzqxl_120144a6-19ba-4119-9ef7-7c70664c5e0c/neutron-api/0.log"
Dec 03 18:51:57 crc kubenswrapper[4687]: I1203 18:51:57.846130 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-77nw6_7605bbe6-2e0b-4b5e-ad4f-eed72b8e5502/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:51:58 crc kubenswrapper[4687]: I1203 18:51:58.375304 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_63033eea-9708-468e-b1e6-87e6882a5c75/nova-api-log/0.log"
Dec 03 18:51:58 crc kubenswrapper[4687]: I1203 18:51:58.531295 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_be4907b0-15af-400a-8430-ee3890e80010/nova-cell0-conductor-conductor/0.log"
Dec 03 18:51:58 crc kubenswrapper[4687]: I1203 18:51:58.761343 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_63033eea-9708-468e-b1e6-87e6882a5c75/nova-api-api/0.log"
Dec 03 18:51:58 crc kubenswrapper[4687]: I1203 18:51:58.830785 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d6d72bd8-fd40-4856-96ee-f753ba4c170b/nova-cell1-conductor-conductor/0.log"
Dec 03 18:51:59 crc kubenswrapper[4687]: I1203 18:51:59.056377 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-l9stg_90387c4a-7957-4b6a-983a-0608fe7a0977/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:51:59 crc kubenswrapper[4687]: I1203 18:51:59.143205 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c7b5f7c9-5d07-41ea-8c3b-3e23a3215c90/nova-cell1-novncproxy-novncproxy/0.log"
Dec 03 18:51:59 crc kubenswrapper[4687]: I1203 18:51:59.231948 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c0ff347c-1775-431c-bc91-ed5a80ee620e/nova-metadata-log/0.log"
Dec 03 18:51:59 crc kubenswrapper[4687]: I1203 18:51:59.485208 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b00142cd-f59e-49d3-9d26-e1344598a59a/mysql-bootstrap/0.log"
Dec 03 18:51:59 crc kubenswrapper[4687]: I1203 18:51:59.550719 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_3be8282f-510f-4d0d-a98f-8aab605e3805/nova-scheduler-scheduler/0.log"
Dec 03 18:51:59 crc kubenswrapper[4687]: I1203 18:51:59.635410 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b00142cd-f59e-49d3-9d26-e1344598a59a/mysql-bootstrap/0.log"
Dec 03 18:51:59 crc kubenswrapper[4687]: I1203 18:51:59.694677 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b00142cd-f59e-49d3-9d26-e1344598a59a/galera/0.log"
Dec 03 18:51:59 crc kubenswrapper[4687]: I1203 18:51:59.857957 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_04732311-c8eb-4351-a564-78ce8c8e1811/mysql-bootstrap/0.log"
Dec 03 18:52:00 crc kubenswrapper[4687]: I1203 18:52:00.032250 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_04732311-c8eb-4351-a564-78ce8c8e1811/mysql-bootstrap/0.log"
Dec 03 18:52:00 crc kubenswrapper[4687]: I1203 18:52:00.091394 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_04732311-c8eb-4351-a564-78ce8c8e1811/galera/0.log"
Dec 03 18:52:00 crc kubenswrapper[4687]: I1203 18:52:00.744580 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b2bf6226-8105-471c-8098-0786e52ab01d/openstackclient/0.log"
Dec 03 18:52:00 crc kubenswrapper[4687]: I1203 18:52:00.798089 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c0ff347c-1775-431c-bc91-ed5a80ee620e/nova-metadata-metadata/0.log"
Dec 03 18:52:00 crc kubenswrapper[4687]: I1203 18:52:00.871953 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-2lczs_3037eba1-1fab-4d56-a3f0-1cecb58b3f7a/ovn-controller/0.log"
Dec 03 18:52:01 crc kubenswrapper[4687]: I1203 18:52:01.226391 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4sqs2_53de0da8-3b25-403a-9956-79082a62780b/openstack-network-exporter/0.log"
Dec 03 18:52:01 crc kubenswrapper[4687]: I1203 18:52:01.314558 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtnmq_2642fdf0-56b9-4b22-ace6-cde247a8f08e/ovsdb-server-init/0.log"
Dec 03 18:52:01 crc kubenswrapper[4687]: I1203 18:52:01.482416 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtnmq_2642fdf0-56b9-4b22-ace6-cde247a8f08e/ovsdb-server-init/0.log"
Dec 03 18:52:01 crc kubenswrapper[4687]: I1203 18:52:01.511909 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtnmq_2642fdf0-56b9-4b22-ace6-cde247a8f08e/ovsdb-server/0.log"
Dec 03 18:52:01 crc kubenswrapper[4687]: I1203 18:52:01.526617 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-gtnmq_2642fdf0-56b9-4b22-ace6-cde247a8f08e/ovs-vswitchd/0.log"
Dec 03 18:52:01 crc kubenswrapper[4687]: I1203 18:52:01.709647 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-tqzn2_cf4db291-8ad7-4e7e-8843-29e3287b05ca/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:52:01 crc kubenswrapper[4687]: I1203 18:52:01.737161 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fe36f76e-b5b2-4dfe-923b-0516ea0af76f/openstack-network-exporter/0.log"
Dec 03 18:52:01 crc kubenswrapper[4687]: I1203 18:52:01.806898 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fe36f76e-b5b2-4dfe-923b-0516ea0af76f/ovn-northd/0.log"
Dec 03 18:52:01 crc kubenswrapper[4687]: I1203 18:52:01.947952 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aff56e13-4338-42bd-a378-b0d72daa296e/openstack-network-exporter/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.005051 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_aff56e13-4338-42bd-a378-b0d72daa296e/ovsdbserver-nb/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.139502 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2e41fb58-0d75-4204-85eb-7c5526d637e6/openstack-network-exporter/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.273033 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2e41fb58-0d75-4204-85eb-7c5526d637e6/ovsdbserver-sb/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.320673 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-699567968b-hhzfv_66dbaeab-7905-40ae-9e1e-3674573a1aa3/placement-api/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.459723 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-699567968b-hhzfv_66dbaeab-7905-40ae-9e1e-3674573a1aa3/placement-log/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.545160 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b31a63e3-b46e-403c-b1b4-3acd833f453f/setup-container/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.785161 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b31a63e3-b46e-403c-b1b4-3acd833f453f/setup-container/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.803357 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bef36ed8-b2b0-465c-9719-c9ff963dcd2f/setup-container/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.848175 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b31a63e3-b46e-403c-b1b4-3acd833f453f/rabbitmq/0.log"
Dec 03 18:52:02 crc kubenswrapper[4687]: I1203 18:52:02.971983 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bef36ed8-b2b0-465c-9719-c9ff963dcd2f/setup-container/0.log"
Dec 03 18:52:03 crc kubenswrapper[4687]: I1203 18:52:03.008248 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_bef36ed8-b2b0-465c-9719-c9ff963dcd2f/rabbitmq/0.log"
Dec 03 18:52:03 crc kubenswrapper[4687]: I1203 18:52:03.080493 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-mg84h_b8e74449-f8e1-4cf8-8a93-e04ee18070e1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:52:03 crc kubenswrapper[4687]: I1203 18:52:03.164863 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-lrxk2_788d4c10-cc61-4086-8e29-6dcdf6592f4a/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:52:03 crc kubenswrapper[4687]: I1203 18:52:03.348157 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-6c5k8_416ff6ab-b4d6-451c-8219-1db28ce18f92/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:52:03 crc kubenswrapper[4687]: I1203 18:52:03.491607 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-kcjpj_ba0bb298-d1f4-478c-a663-9a8e20bfdcfd/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:52:03 crc kubenswrapper[4687]: I1203 18:52:03.596040 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ssfcd_cc101fd4-addb-4d63-b123-d0c54197956c/ssh-known-hosts-edpm-deployment/0.log"
Dec 03 18:52:03 crc kubenswrapper[4687]: I1203 18:52:03.778109 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7bd478575-t6xjs_70063881-c779-4ed9-9258-a175b3ee15f4/proxy-server/0.log"
Dec 03 18:52:03 crc kubenswrapper[4687]: I1203 18:52:03.855200 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7bd478575-t6xjs_70063881-c779-4ed9-9258-a175b3ee15f4/proxy-httpd/0.log"
Dec 03 18:52:03 crc kubenswrapper[4687]: I1203 18:52:03.920373 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-kl6nk_9f72b95f-3e3d-49b4-8bca-8d391384a077/swift-ring-rebalance/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.034284 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/account-auditor/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.068468 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/account-reaper/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.152749 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/account-replicator/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.266850 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/container-auditor/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.287045 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/account-server/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.294906 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/container-replicator/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.386349 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/container-server/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.432349 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/container-updater/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.477037 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-auditor/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.510671 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-expirer/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.629472 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-server/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.635016 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-replicator/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.686718 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/object-updater/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.750789 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/rsync/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.838695 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ab57f25f-0766-479b-ba47-e0b90c955b0d/swift-recon-cron/0.log"
Dec 03 18:52:04 crc kubenswrapper[4687]: I1203 18:52:04.909744 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4c62j_0ce84a46-82bc-42a8-b645-d801d2a8edff/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:52:05 crc kubenswrapper[4687]: I1203 18:52:05.057013 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_3c56ab4c-455a-4436-927e-3dba7e4aa0ba/tempest-tests-tempest-tests-runner/0.log"
Dec 03 18:52:05 crc kubenswrapper[4687]: I1203 18:52:05.119141 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e1272d14-143a-4ce6-9b77-7fa6e7cd99f0/test-operator-logs-container/0.log"
Dec 03 18:52:05 crc kubenswrapper[4687]: I1203 18:52:05.275888 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-ztppq_6c19f653-0ec6-4a75-a396-dacbe41c2c2e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 03 18:52:05 crc kubenswrapper[4687]: I1203 18:52:05.407269 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"
Dec 03 18:52:05 crc kubenswrapper[4687]: E1203 18:52:05.408010 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1"
Dec 03 18:52:15 crc kubenswrapper[4687]: I1203 18:52:15.363663 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b6b36375-980f-4c1d-8ddb-61d9565db565/memcached/0.log"
Dec 03 18:52:16 crc kubenswrapper[4687]: I1203 18:52:16.408410 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"
Dec 03 18:52:16 crc kubenswrapper[4687]: E1203 18:52:16.408649 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1"
Dec 03 18:52:29 crc kubenswrapper[4687]: I1203 18:52:29.408020 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"
Dec 03 18:52:29 crc kubenswrapper[4687]: E1203 18:52:29.408977 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1"
Dec 03 18:52:34 crc kubenswrapper[4687]: I1203 18:52:34.790015 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-zmzr6_d3d2df8d-6f3d-4f5d-afd3-cef00553188e/kube-rbac-proxy/0.log"
Dec 03 18:52:34 crc kubenswrapper[4687]: I1203 18:52:34.889091 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-zmzr6_d3d2df8d-6f3d-4f5d-afd3-cef00553188e/manager/0.log"
Dec 03 18:52:34 crc kubenswrapper[4687]: I1203 18:52:34.999774 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/util/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.140654 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/util/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.159543 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/pull/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.191372 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/pull/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.352742 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/util/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.372997 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/extract/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.381427 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c7543650ec36ee199a0d2a8906237b0052372b509758d0d810fcbc41efpdmv5_af717b0b-ea3c-4c07-b9ec-9bfcb5fb03d6/pull/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.554993 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-fn5xb_f7046b74-0868-4ee1-b917-56e695a94d16/kube-rbac-proxy/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.620637 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-fn5xb_f7046b74-0868-4ee1-b917-56e695a94d16/manager/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.647512 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-6xgff_6fa88489-3c47-4369-9f87-a3f029f75a42/kube-rbac-proxy/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.749487 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-6xgff_6fa88489-3c47-4369-9f87-a3f029f75a42/manager/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.820991 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-nwzp4_496e4d0a-a886-4d53-993c-66081d8843ae/kube-rbac-proxy/0.log"
Dec 03 18:52:35 crc kubenswrapper[4687]: I1203 18:52:35.918407 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-nwzp4_496e4d0a-a886-4d53-993c-66081d8843ae/manager/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.020960 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-h6x45_b63b97e0-be73-4e96-9904-9f5c030a0afb/manager/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.021895 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-h6x45_b63b97e0-be73-4e96-9904-9f5c030a0afb/kube-rbac-proxy/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.203398 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-sdbgv_0e3acf7a-4766-4f89-9f70-d5ec2690318b/kube-rbac-proxy/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.263548 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-sdbgv_0e3acf7a-4766-4f89-9f70-d5ec2690318b/manager/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.343591 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-lx2md_6abb698e-8c6d-40c8-b87d-dcd828bba5d3/kube-rbac-proxy/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.482369 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-fcwrt_e48eab37-9bd2-4f8d-892a-4436c68bab21/kube-rbac-proxy/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.539237 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-lx2md_6abb698e-8c6d-40c8-b87d-dcd828bba5d3/manager/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.566844 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-fcwrt_e48eab37-9bd2-4f8d-892a-4436c68bab21/manager/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.681331 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-mzvdw_e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b/kube-rbac-proxy/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.782032 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-mzvdw_e91d23d6-7eed-4927-b5b1-3ef9d51b8d1b/manager/0.log"
Dec 03 18:52:36 crc kubenswrapper[4687]: I1203 18:52:36.925870 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-hrlqq_59db1fe9-9d85-4346-8718-4e9139c8acb9/kube-rbac-proxy/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.071696 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-hrlqq_59db1fe9-9d85-4346-8718-4e9139c8acb9/manager/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.131829 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-bfwb6_491fb200-3ef9-4833-83c6-22b575b46998/kube-rbac-proxy/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.217179 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-bfwb6_491fb200-3ef9-4833-83c6-22b575b46998/manager/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.315114 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-7ftp5_d57e7a62-6958-4e64-98e6-a22857b00e32/kube-rbac-proxy/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.392044 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-7ftp5_d57e7a62-6958-4e64-98e6-a22857b00e32/manager/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.490164 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9pldz_379ff892-6dae-4b1b-9ae1-f6b7da9f4db6/kube-rbac-proxy/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.609798 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9pldz_379ff892-6dae-4b1b-9ae1-f6b7da9f4db6/manager/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.619789 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xj6hg_5952221c-60d0-4159-bbd8-2adf2f1e3d8e/kube-rbac-proxy/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.694064 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-xj6hg_5952221c-60d0-4159-bbd8-2adf2f1e3d8e/manager/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.761811 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w_58a46d42-dade-4bfe-b9b0-bddac75f1d81/kube-rbac-proxy/0.log"
Dec 03 18:52:37 crc kubenswrapper[4687]: I1203 18:52:37.808692 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4ftc8w_58a46d42-dade-4bfe-b9b0-bddac75f1d81/manager/0.log"
Dec 03 18:52:38 crc kubenswrapper[4687]: I1203 18:52:38.202229 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-586db6c45c-hj8pp_c70399a2-304f-40f7-9f8e-b566d290ede2/operator/0.log"
Dec 03 18:52:38 crc kubenswrapper[4687]: I1203 18:52:38.235357 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5r6vj_8e1a26a4-e1d4-4d8f-a452-a86a688788f3/registry-server/0.log"
Dec 03 18:52:38 crc kubenswrapper[4687]: I1203 18:52:38.422773 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-xjjxv_1655eb12-9c61-4959-9886-bd6f50b95292/kube-rbac-proxy/0.log"
Dec 03 18:52:38 crc kubenswrapper[4687]: I1203 18:52:38.462683 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-xjjxv_1655eb12-9c61-4959-9886-bd6f50b95292/manager/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.083404 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-vpdn7_e0b4d539-a10d-4f94-8097-667df133713d/kube-rbac-proxy/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.124062 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65f8659594-f2bcj_a9c3ecf7-40b8-43a9-902d-0fe02be37037/manager/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.151262 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wpfh8_9c5e71f4-be0f-4da7-8d14-bb46cc12c5b3/operator/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.179975 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-vpdn7_e0b4d539-a10d-4f94-8097-667df133713d/manager/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.294286 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-gbkkg_6e6fc336-ee86-4c81-bbc7-76b241f4cffa/kube-rbac-proxy/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.352865 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-gbkkg_6e6fc336-ee86-4c81-bbc7-76b241f4cffa/manager/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.395282 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-vxwfl_785c9182-9230-4d64-9a16-81877ee4d03e/kube-rbac-proxy/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.570755 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-vxwfl_785c9182-9230-4d64-9a16-81877ee4d03e/manager/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.595169 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-58bfx_f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf/manager/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.607257 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-58bfx_f4e7e89d-5de2-4cc7-93e1-a8d7aecc57bf/kube-rbac-proxy/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.744520 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-xvq78_b119316e-0e6a-43d8-a5e3-0068f099fad0/kube-rbac-proxy/0.log"
Dec 03 18:52:39 crc kubenswrapper[4687]: I1203 18:52:39.761545 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-xvq78_b119316e-0e6a-43d8-a5e3-0068f099fad0/manager/0.log"
Dec 03 18:52:40 crc kubenswrapper[4687]: I1203 18:52:40.407713 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"
Dec 03 18:52:40 crc kubenswrapper[4687]: E1203 18:52:40.408296 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1"
Dec 03 18:52:52 crc kubenswrapper[4687]: I1203 18:52:52.407734 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"
Dec 03 18:52:52 crc kubenswrapper[4687]: E1203 18:52:52.408365 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1"
Dec 03 18:53:01 crc kubenswrapper[4687]: I1203 18:53:01.275808 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xv2xd_e248449e-8a3d-418a-8f0f-0b8484d27c39/control-plane-machine-set-operator/0.log"
Dec 03 18:53:01 crc kubenswrapper[4687]: I1203 18:53:01.455852 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q8fqs_bcfb21f2-e1fe-42f0-b166-a2f50847cc6b/kube-rbac-proxy/0.log"
Dec 03 18:53:01 crc kubenswrapper[4687]: I1203 18:53:01.470526 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-q8fqs_bcfb21f2-e1fe-42f0-b166-a2f50847cc6b/machine-api-operator/0.log"
Dec 03 18:53:07 crc kubenswrapper[4687]: I1203 18:53:07.416332 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"
Dec 03 18:53:07 crc kubenswrapper[4687]: E1203 18:53:07.417692 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1"
Dec 03 18:53:13 crc kubenswrapper[4687]: I1203 18:53:13.764677 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-f2zqs_46718bd5-eda0-473f-ba31-97f2a591fefe/cert-manager-controller/0.log"
Dec 03 18:53:13 crc kubenswrapper[4687]: I1203 18:53:13.917339 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-894dz_ca5b85a2-69d2-428e-9c2a-9e1fdcff7b43/cert-manager-webhook/0.log"
Dec 03 18:53:13 crc kubenswrapper[4687]: I1203 18:53:13.928268 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-pxzt9_91a9f246-dffa-4891-a4b8-91962e0bdbad/cert-manager-cainjector/0.log"
Dec 03 18:53:19 crc kubenswrapper[4687]: I1203 18:53:19.407504 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f"
Dec 03 18:53:19 crc kubenswrapper[4687]: E1203 18:53:19.408303 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1"
Dec 03 18:53:26 crc kubenswrapper[4687]: I1203 18:53:26.274277 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-dkrpg_b1bd9d52-1f74-4001-a2ff-c3a84666c686/nmstate-console-plugin/0.log"
Dec 03 18:53:26 crc kubenswrapper[4687]: I1203 18:53:26.431208 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-p2m72_9623c042-2813-4192-a1fc-a92a58364fce/nmstate-handler/0.log"
Dec 
03 18:53:26 crc kubenswrapper[4687]: I1203 18:53:26.473996 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ncxtd_34214bad-1472-4611-9876-d7765279821c/nmstate-metrics/0.log" Dec 03 18:53:26 crc kubenswrapper[4687]: I1203 18:53:26.483316 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-ncxtd_34214bad-1472-4611-9876-d7765279821c/kube-rbac-proxy/0.log" Dec 03 18:53:26 crc kubenswrapper[4687]: I1203 18:53:26.627250 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-l4nkx_d803fc3b-cbaa-4241-870a-7c89982621dd/nmstate-operator/0.log" Dec 03 18:53:26 crc kubenswrapper[4687]: I1203 18:53:26.662172 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-blkh6_b8bf00a4-e266-4c05-bfc5-4121c96f0368/nmstate-webhook/0.log" Dec 03 18:53:34 crc kubenswrapper[4687]: I1203 18:53:34.407339 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:53:34 crc kubenswrapper[4687]: E1203 18:53:34.408264 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:53:40 crc kubenswrapper[4687]: I1203 18:53:40.982551 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-xc95b_bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3/kube-rbac-proxy/0.log" Dec 03 18:53:41 crc kubenswrapper[4687]: I1203 18:53:41.077067 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-f8648f98b-xc95b_bff2bdf6-ec54-4e9e-8d82-d5ed87643dd3/controller/0.log" Dec 03 18:53:41 crc kubenswrapper[4687]: I1203 18:53:41.658960 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-frr-files/0.log" Dec 03 18:53:41 crc kubenswrapper[4687]: I1203 18:53:41.853039 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-metrics/0.log" Dec 03 18:53:41 crc kubenswrapper[4687]: I1203 18:53:41.860925 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-frr-files/0.log" Dec 03 18:53:41 crc kubenswrapper[4687]: I1203 18:53:41.861580 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-reloader/0.log" Dec 03 18:53:41 crc kubenswrapper[4687]: I1203 18:53:41.891473 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-reloader/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.095748 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-frr-files/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.112458 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-reloader/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.133341 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-metrics/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.136587 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-metrics/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.321478 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-reloader/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.346945 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/controller/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.347088 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-metrics/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.348092 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/cp-frr-files/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.625279 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/frr-metrics/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.655576 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/kube-rbac-proxy-frr/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.662004 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/kube-rbac-proxy/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.851616 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/reloader/0.log" Dec 03 18:53:42 crc kubenswrapper[4687]: I1203 18:53:42.863861 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-z7q2l_ba8d9037-40bd-4f5b-bd59-139f36424600/frr-k8s-webhook-server/0.log" Dec 03 18:53:43 crc kubenswrapper[4687]: I1203 18:53:43.505872 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-758fc566f8-ssxcf_20b383be-ffda-4db5-8914-c3a22cfb94ec/manager/0.log" Dec 03 18:53:43 crc kubenswrapper[4687]: I1203 18:53:43.712188 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6cfb994ff-8gwcx_3c6529b3-3b9c-4329-8ed7-05431ec4a4bf/webhook-server/0.log" Dec 03 18:53:43 crc kubenswrapper[4687]: I1203 18:53:43.778727 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rzhqb_fe83569c-2e40-440d-85fc-764d28429dbf/kube-rbac-proxy/0.log" Dec 03 18:53:44 crc kubenswrapper[4687]: I1203 18:53:44.080910 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d6gdp_bda58d5c-98aa-4889-bbd8-f7336cc0aade/frr/0.log" Dec 03 18:53:44 crc kubenswrapper[4687]: I1203 18:53:44.239520 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rzhqb_fe83569c-2e40-440d-85fc-764d28429dbf/speaker/0.log" Dec 03 18:53:46 crc kubenswrapper[4687]: I1203 18:53:46.407728 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:53:46 crc kubenswrapper[4687]: E1203 18:53:46.408309 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.059362 
4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/util/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.239329 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/util/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.250529 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/pull/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.253890 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/pull/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.397151 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/util/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.401841 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/extract/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.413063 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:53:57 crc kubenswrapper[4687]: E1203 18:53:57.413381 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.419101 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fj6lhb_0de6d07d-8385-44ce-a57a-7950e1c8da08/pull/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.575515 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/util/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.714420 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/util/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.732257 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/pull/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.759984 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/pull/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.915161 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/util/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.927466 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/extract/0.log" Dec 03 18:53:57 crc kubenswrapper[4687]: I1203 18:53:57.950324 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83b2n58_c98b03c2-e740-402d-b2f8-d8ab27224b94/pull/0.log" Dec 03 18:53:58 crc kubenswrapper[4687]: I1203 18:53:58.096384 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-utilities/0.log" Dec 03 18:53:58 crc kubenswrapper[4687]: I1203 18:53:58.266051 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-content/0.log" Dec 03 18:53:58 crc kubenswrapper[4687]: I1203 18:53:58.270905 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-utilities/0.log" Dec 03 18:53:58 crc kubenswrapper[4687]: I1203 18:53:58.308659 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-content/0.log" Dec 03 18:53:58 crc kubenswrapper[4687]: I1203 18:53:58.469316 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-utilities/0.log" Dec 03 18:53:58 crc kubenswrapper[4687]: I1203 18:53:58.503010 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/extract-content/0.log" Dec 03 18:53:58 crc kubenswrapper[4687]: I1203 18:53:58.746899 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-utilities/0.log" Dec 03 18:53:58 crc kubenswrapper[4687]: I1203 18:53:58.982354 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mv6k8_6718dbad-e886-4c4c-b078-7b0ef1d4ee57/registry-server/0.log" Dec 03 18:53:58 crc kubenswrapper[4687]: I1203 18:53:58.991240 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-utilities/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.007846 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-content/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.046976 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-content/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.139291 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-content/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.161312 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/extract-utilities/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.336043 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6bp27_d7aa828b-8739-41ee-bdd4-81f7b5421561/marketplace-operator/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.503024 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-utilities/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.725274 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-utilities/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.744536 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-content/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.777082 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pp8q2_9b4d5812-779c-4a37-bbaf-a9812dd96d93/registry-server/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.808819 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-content/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.951630 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-content/0.log" Dec 03 18:53:59 crc kubenswrapper[4687]: I1203 18:53:59.956412 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/extract-utilities/0.log" Dec 03 18:54:00 crc kubenswrapper[4687]: I1203 18:54:00.113671 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pzzh6_1756ac21-d3d5-4255-ad09-3c783d85b99f/registry-server/0.log" Dec 03 18:54:00 crc kubenswrapper[4687]: I1203 18:54:00.196021 4687 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-utilities/0.log" Dec 03 18:54:00 crc kubenswrapper[4687]: I1203 18:54:00.314297 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-utilities/0.log" Dec 03 18:54:00 crc kubenswrapper[4687]: I1203 18:54:00.317059 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-content/0.log" Dec 03 18:54:00 crc kubenswrapper[4687]: I1203 18:54:00.360768 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-content/0.log" Dec 03 18:54:00 crc kubenswrapper[4687]: I1203 18:54:00.514188 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-content/0.log" Dec 03 18:54:00 crc kubenswrapper[4687]: I1203 18:54:00.538056 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/extract-utilities/0.log" Dec 03 18:54:00 crc kubenswrapper[4687]: I1203 18:54:00.675236 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vdqwc_d8c2c83b-47e6-4b42-a034-ba86180d732c/registry-server/0.log" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.408410 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:54:10 crc kubenswrapper[4687]: E1203 18:54:10.409558 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.665353 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4ck5"] Dec 03 18:54:10 crc kubenswrapper[4687]: E1203 18:54:10.666052 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e07d73-3c76-4811-ace7-42d1efbfe29b" containerName="container-00" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.666083 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e07d73-3c76-4811-ace7-42d1efbfe29b" containerName="container-00" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.666456 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e07d73-3c76-4811-ace7-42d1efbfe29b" containerName="container-00" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.668463 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.701482 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4ck5"] Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.750279 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvhs\" (UniqueName: \"kubernetes.io/projected/549895bb-9089-4093-b3d3-16eac2c32878-kube-api-access-8qvhs\") pod \"redhat-operators-l4ck5\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.750514 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-catalog-content\") pod \"redhat-operators-l4ck5\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.750582 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-utilities\") pod \"redhat-operators-l4ck5\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.852918 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qvhs\" (UniqueName: \"kubernetes.io/projected/549895bb-9089-4093-b3d3-16eac2c32878-kube-api-access-8qvhs\") pod \"redhat-operators-l4ck5\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.853041 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-catalog-content\") pod \"redhat-operators-l4ck5\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.853081 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-utilities\") pod \"redhat-operators-l4ck5\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.853608 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-utilities\") pod \"redhat-operators-l4ck5\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.853660 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-catalog-content\") pod \"redhat-operators-l4ck5\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:10 crc kubenswrapper[4687]: I1203 18:54:10.874548 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qvhs\" (UniqueName: \"kubernetes.io/projected/549895bb-9089-4093-b3d3-16eac2c32878-kube-api-access-8qvhs\") pod \"redhat-operators-l4ck5\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:11 crc kubenswrapper[4687]: I1203 18:54:11.005290 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:11 crc kubenswrapper[4687]: I1203 18:54:11.450842 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4ck5"] Dec 03 18:54:11 crc kubenswrapper[4687]: I1203 18:54:11.520629 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4ck5" event={"ID":"549895bb-9089-4093-b3d3-16eac2c32878","Type":"ContainerStarted","Data":"d1b36475b0940517560c33ddee1c221eebd8e8a801bd69cf9a5af28226f4eb76"} Dec 03 18:54:12 crc kubenswrapper[4687]: I1203 18:54:12.531863 4687 generic.go:334] "Generic (PLEG): container finished" podID="549895bb-9089-4093-b3d3-16eac2c32878" containerID="cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471" exitCode=0 Dec 03 18:54:12 crc kubenswrapper[4687]: I1203 18:54:12.531952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4ck5" event={"ID":"549895bb-9089-4093-b3d3-16eac2c32878","Type":"ContainerDied","Data":"cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471"} Dec 03 18:54:13 crc kubenswrapper[4687]: I1203 18:54:13.541920 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4ck5" event={"ID":"549895bb-9089-4093-b3d3-16eac2c32878","Type":"ContainerStarted","Data":"6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c"} Dec 03 18:54:17 crc kubenswrapper[4687]: I1203 18:54:17.585860 4687 generic.go:334] "Generic (PLEG): container finished" podID="549895bb-9089-4093-b3d3-16eac2c32878" containerID="6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c" exitCode=0 Dec 03 18:54:17 crc kubenswrapper[4687]: I1203 18:54:17.585969 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4ck5" 
event={"ID":"549895bb-9089-4093-b3d3-16eac2c32878","Type":"ContainerDied","Data":"6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c"} Dec 03 18:54:19 crc kubenswrapper[4687]: I1203 18:54:19.609194 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4ck5" event={"ID":"549895bb-9089-4093-b3d3-16eac2c32878","Type":"ContainerStarted","Data":"420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2"} Dec 03 18:54:19 crc kubenswrapper[4687]: I1203 18:54:19.629872 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4ck5" podStartSLOduration=2.927130549 podStartE2EDuration="9.629858629s" podCreationTimestamp="2025-12-03 18:54:10 +0000 UTC" firstStartedPulling="2025-12-03 18:54:12.535176023 +0000 UTC m=+4485.425871496" lastFinishedPulling="2025-12-03 18:54:19.237904143 +0000 UTC m=+4492.128599576" observedRunningTime="2025-12-03 18:54:19.62728054 +0000 UTC m=+4492.517975963" watchObservedRunningTime="2025-12-03 18:54:19.629858629 +0000 UTC m=+4492.520554062" Dec 03 18:54:21 crc kubenswrapper[4687]: I1203 18:54:21.006034 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:21 crc kubenswrapper[4687]: I1203 18:54:21.006161 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:21 crc kubenswrapper[4687]: I1203 18:54:21.407481 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:54:21 crc kubenswrapper[4687]: E1203 18:54:21.408160 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:54:22 crc kubenswrapper[4687]: I1203 18:54:22.118333 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l4ck5" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="registry-server" probeResult="failure" output=< Dec 03 18:54:22 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Dec 03 18:54:22 crc kubenswrapper[4687]: > Dec 03 18:54:32 crc kubenswrapper[4687]: I1203 18:54:32.057860 4687 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l4ck5" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="registry-server" probeResult="failure" output=< Dec 03 18:54:32 crc kubenswrapper[4687]: timeout: failed to connect service ":50051" within 1s Dec 03 18:54:32 crc kubenswrapper[4687]: > Dec 03 18:54:33 crc kubenswrapper[4687]: I1203 18:54:33.415609 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:54:33 crc kubenswrapper[4687]: E1203 18:54:33.416421 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:54:41 crc kubenswrapper[4687]: I1203 18:54:41.078796 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:41 crc kubenswrapper[4687]: I1203 18:54:41.151079 4687 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:41 crc kubenswrapper[4687]: I1203 18:54:41.871604 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4ck5"] Dec 03 18:54:42 crc kubenswrapper[4687]: I1203 18:54:42.824329 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l4ck5" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="registry-server" containerID="cri-o://420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2" gracePeriod=2 Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.382927 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.422433 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-utilities\") pod \"549895bb-9089-4093-b3d3-16eac2c32878\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.422519 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qvhs\" (UniqueName: \"kubernetes.io/projected/549895bb-9089-4093-b3d3-16eac2c32878-kube-api-access-8qvhs\") pod \"549895bb-9089-4093-b3d3-16eac2c32878\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.422582 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-catalog-content\") pod \"549895bb-9089-4093-b3d3-16eac2c32878\" (UID: \"549895bb-9089-4093-b3d3-16eac2c32878\") " Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.424658 4687 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-utilities" (OuterVolumeSpecName: "utilities") pod "549895bb-9089-4093-b3d3-16eac2c32878" (UID: "549895bb-9089-4093-b3d3-16eac2c32878"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.430581 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549895bb-9089-4093-b3d3-16eac2c32878-kube-api-access-8qvhs" (OuterVolumeSpecName: "kube-api-access-8qvhs") pod "549895bb-9089-4093-b3d3-16eac2c32878" (UID: "549895bb-9089-4093-b3d3-16eac2c32878"). InnerVolumeSpecName "kube-api-access-8qvhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.525459 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.525498 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qvhs\" (UniqueName: \"kubernetes.io/projected/549895bb-9089-4093-b3d3-16eac2c32878-kube-api-access-8qvhs\") on node \"crc\" DevicePath \"\"" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.574617 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "549895bb-9089-4093-b3d3-16eac2c32878" (UID: "549895bb-9089-4093-b3d3-16eac2c32878"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.626344 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549895bb-9089-4093-b3d3-16eac2c32878-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.837175 4687 generic.go:334] "Generic (PLEG): container finished" podID="549895bb-9089-4093-b3d3-16eac2c32878" containerID="420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2" exitCode=0 Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.837275 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4ck5" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.837299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4ck5" event={"ID":"549895bb-9089-4093-b3d3-16eac2c32878","Type":"ContainerDied","Data":"420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2"} Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.838208 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4ck5" event={"ID":"549895bb-9089-4093-b3d3-16eac2c32878","Type":"ContainerDied","Data":"d1b36475b0940517560c33ddee1c221eebd8e8a801bd69cf9a5af28226f4eb76"} Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.838246 4687 scope.go:117] "RemoveContainer" containerID="420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.863413 4687 scope.go:117] "RemoveContainer" containerID="6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.890890 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4ck5"] Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 
18:54:43.907965 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l4ck5"] Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.914038 4687 scope.go:117] "RemoveContainer" containerID="cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.942218 4687 scope.go:117] "RemoveContainer" containerID="420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2" Dec 03 18:54:43 crc kubenswrapper[4687]: E1203 18:54:43.942753 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2\": container with ID starting with 420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2 not found: ID does not exist" containerID="420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.942853 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2"} err="failed to get container status \"420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2\": rpc error: code = NotFound desc = could not find container \"420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2\": container with ID starting with 420f73de1d5fbad26016845ead93cd6fac2751eb583e0cdfb1422acd22286bf2 not found: ID does not exist" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.942901 4687 scope.go:117] "RemoveContainer" containerID="6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c" Dec 03 18:54:43 crc kubenswrapper[4687]: E1203 18:54:43.943391 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c\": container with ID 
starting with 6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c not found: ID does not exist" containerID="6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.943427 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c"} err="failed to get container status \"6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c\": rpc error: code = NotFound desc = could not find container \"6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c\": container with ID starting with 6f0722555a4d74f947afd2f28a518e244ecacd132be617fe169ab42488ab567c not found: ID does not exist" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.943451 4687 scope.go:117] "RemoveContainer" containerID="cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471" Dec 03 18:54:43 crc kubenswrapper[4687]: E1203 18:54:43.948817 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471\": container with ID starting with cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471 not found: ID does not exist" containerID="cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471" Dec 03 18:54:43 crc kubenswrapper[4687]: I1203 18:54:43.948859 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471"} err="failed to get container status \"cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471\": rpc error: code = NotFound desc = could not find container \"cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471\": container with ID starting with cb09f7113021f4facf558bf5a7e62f1459240fc95a703afbfa77836ec3c25471 not found: 
ID does not exist" Dec 03 18:54:45 crc kubenswrapper[4687]: I1203 18:54:45.422517 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549895bb-9089-4093-b3d3-16eac2c32878" path="/var/lib/kubelet/pods/549895bb-9089-4093-b3d3-16eac2c32878/volumes" Dec 03 18:54:46 crc kubenswrapper[4687]: I1203 18:54:46.408711 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:54:46 crc kubenswrapper[4687]: E1203 18:54:46.409518 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:54:59 crc kubenswrapper[4687]: I1203 18:54:59.408401 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:54:59 crc kubenswrapper[4687]: E1203 18:54:59.409366 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:55:12 crc kubenswrapper[4687]: I1203 18:55:12.408405 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:55:12 crc kubenswrapper[4687]: E1203 18:55:12.409904 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:55:25 crc kubenswrapper[4687]: I1203 18:55:25.409334 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:55:25 crc kubenswrapper[4687]: E1203 18:55:25.410486 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:55:36 crc kubenswrapper[4687]: I1203 18:55:36.408368 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:55:36 crc kubenswrapper[4687]: E1203 18:55:36.409491 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:55:40 crc kubenswrapper[4687]: I1203 18:55:40.487074 4687 generic.go:334] "Generic (PLEG): container finished" podID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" containerID="de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b" exitCode=0 Dec 03 18:55:40 crc kubenswrapper[4687]: I1203 18:55:40.487197 4687 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-must-gather-tfplk/must-gather-fzjdz" event={"ID":"e72a3ed1-15bc-4362-a4fb-bd912e4d619d","Type":"ContainerDied","Data":"de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b"} Dec 03 18:55:40 crc kubenswrapper[4687]: I1203 18:55:40.488070 4687 scope.go:117] "RemoveContainer" containerID="de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b" Dec 03 18:55:41 crc kubenswrapper[4687]: I1203 18:55:41.011536 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tfplk_must-gather-fzjdz_e72a3ed1-15bc-4362-a4fb-bd912e4d619d/gather/0.log" Dec 03 18:55:49 crc kubenswrapper[4687]: I1203 18:55:49.408299 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:55:49 crc kubenswrapper[4687]: E1203 18:55:49.409646 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:55:50 crc kubenswrapper[4687]: I1203 18:55:50.621114 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tfplk/must-gather-fzjdz"] Dec 03 18:55:50 crc kubenswrapper[4687]: I1203 18:55:50.621526 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tfplk/must-gather-fzjdz" podUID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" containerName="copy" containerID="cri-o://7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e" gracePeriod=2 Dec 03 18:55:50 crc kubenswrapper[4687]: I1203 18:55:50.628901 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tfplk/must-gather-fzjdz"] Dec 03 18:55:51 
crc kubenswrapper[4687]: I1203 18:55:51.069254 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tfplk_must-gather-fzjdz_e72a3ed1-15bc-4362-a4fb-bd912e4d619d/copy/0.log" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.069771 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.158801 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-must-gather-output\") pod \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\" (UID: \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\") " Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.158926 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgw88\" (UniqueName: \"kubernetes.io/projected/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-kube-api-access-fgw88\") pod \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\" (UID: \"e72a3ed1-15bc-4362-a4fb-bd912e4d619d\") " Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.177350 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-kube-api-access-fgw88" (OuterVolumeSpecName: "kube-api-access-fgw88") pod "e72a3ed1-15bc-4362-a4fb-bd912e4d619d" (UID: "e72a3ed1-15bc-4362-a4fb-bd912e4d619d"). InnerVolumeSpecName "kube-api-access-fgw88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.261654 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgw88\" (UniqueName: \"kubernetes.io/projected/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-kube-api-access-fgw88\") on node \"crc\" DevicePath \"\"" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.346105 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e72a3ed1-15bc-4362-a4fb-bd912e4d619d" (UID: "e72a3ed1-15bc-4362-a4fb-bd912e4d619d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.363630 4687 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e72a3ed1-15bc-4362-a4fb-bd912e4d619d-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.417449 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" path="/var/lib/kubelet/pods/e72a3ed1-15bc-4362-a4fb-bd912e4d619d/volumes" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.615627 4687 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tfplk_must-gather-fzjdz_e72a3ed1-15bc-4362-a4fb-bd912e4d619d/copy/0.log" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.615998 4687 generic.go:334] "Generic (PLEG): container finished" podID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" containerID="7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e" exitCode=143 Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.616034 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tfplk/must-gather-fzjdz" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.616060 4687 scope.go:117] "RemoveContainer" containerID="7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.635093 4687 scope.go:117] "RemoveContainer" containerID="de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.735368 4687 scope.go:117] "RemoveContainer" containerID="7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e" Dec 03 18:55:51 crc kubenswrapper[4687]: E1203 18:55:51.735826 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e\": container with ID starting with 7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e not found: ID does not exist" containerID="7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.735890 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e"} err="failed to get container status \"7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e\": rpc error: code = NotFound desc = could not find container \"7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e\": container with ID starting with 7eaa46180f08284795377ece23c438e1d2e8329b43f082214243ebe4fdd5c63e not found: ID does not exist" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.735932 4687 scope.go:117] "RemoveContainer" containerID="de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b" Dec 03 18:55:51 crc kubenswrapper[4687]: E1203 18:55:51.736440 4687 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b\": container with ID starting with de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b not found: ID does not exist" containerID="de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b" Dec 03 18:55:51 crc kubenswrapper[4687]: I1203 18:55:51.736479 4687 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b"} err="failed to get container status \"de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b\": rpc error: code = NotFound desc = could not find container \"de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b\": container with ID starting with de3875199e30bdfda7d345ce52c8d31982451f9042bba324883772347b762a0b not found: ID does not exist" Dec 03 18:56:04 crc kubenswrapper[4687]: I1203 18:56:04.407276 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:56:04 crc kubenswrapper[4687]: E1203 18:56:04.407931 4687 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gz2wq_openshift-machine-config-operator(fab93456-303f-4c39-93a9-f52dcab12ac1)\"" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" Dec 03 18:56:17 crc kubenswrapper[4687]: I1203 18:56:17.416838 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 18:56:17 crc kubenswrapper[4687]: I1203 18:56:17.910669 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" 
event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"40a1d95eb77c49f17f9105372cb4d9ea8b08cfddf4a84d85de442fd476929d55"} Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.133765 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xxkzp"] Dec 03 18:57:28 crc kubenswrapper[4687]: E1203 18:57:28.134816 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="extract-content" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.134831 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="extract-content" Dec 03 18:57:28 crc kubenswrapper[4687]: E1203 18:57:28.134847 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="registry-server" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.134854 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="registry-server" Dec 03 18:57:28 crc kubenswrapper[4687]: E1203 18:57:28.134868 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="extract-utilities" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.134875 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="extract-utilities" Dec 03 18:57:28 crc kubenswrapper[4687]: E1203 18:57:28.134909 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" containerName="copy" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.134919 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" containerName="copy" Dec 03 18:57:28 crc kubenswrapper[4687]: E1203 18:57:28.134930 4687 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" containerName="gather" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.134936 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" containerName="gather" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.135196 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" containerName="copy" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.135216 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="e72a3ed1-15bc-4362-a4fb-bd912e4d619d" containerName="gather" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.135391 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="549895bb-9089-4093-b3d3-16eac2c32878" containerName="registry-server" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.137437 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.161869 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xxkzp"] Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.266265 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-utilities\") pod \"community-operators-xxkzp\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.266535 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2l46\" (UniqueName: \"kubernetes.io/projected/7fb8bf55-550a-4185-89f0-11a5d709cef1-kube-api-access-x2l46\") pod \"community-operators-xxkzp\" (UID: 
\"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.266699 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-catalog-content\") pod \"community-operators-xxkzp\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.368718 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-catalog-content\") pod \"community-operators-xxkzp\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.368840 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-utilities\") pod \"community-operators-xxkzp\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.368915 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2l46\" (UniqueName: \"kubernetes.io/projected/7fb8bf55-550a-4185-89f0-11a5d709cef1-kube-api-access-x2l46\") pod \"community-operators-xxkzp\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.373554 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-catalog-content\") pod \"community-operators-xxkzp\" (UID: 
\"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.373626 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-utilities\") pod \"community-operators-xxkzp\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.393618 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2l46\" (UniqueName: \"kubernetes.io/projected/7fb8bf55-550a-4185-89f0-11a5d709cef1-kube-api-access-x2l46\") pod \"community-operators-xxkzp\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.464689 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:28 crc kubenswrapper[4687]: I1203 18:57:28.984970 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xxkzp"] Dec 03 18:57:29 crc kubenswrapper[4687]: I1203 18:57:29.662167 4687 generic.go:334] "Generic (PLEG): container finished" podID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerID="311fc8d6495140fbd1de7af0212bbb347fc75506763db4d8ada397ac2266de3e" exitCode=0 Dec 03 18:57:29 crc kubenswrapper[4687]: I1203 18:57:29.662233 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxkzp" event={"ID":"7fb8bf55-550a-4185-89f0-11a5d709cef1","Type":"ContainerDied","Data":"311fc8d6495140fbd1de7af0212bbb347fc75506763db4d8ada397ac2266de3e"} Dec 03 18:57:29 crc kubenswrapper[4687]: I1203 18:57:29.662299 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxkzp" 
event={"ID":"7fb8bf55-550a-4185-89f0-11a5d709cef1","Type":"ContainerStarted","Data":"8ace41835380b285b270b6329d0003df223ade42374f8ad9801044bf9a88366b"} Dec 03 18:57:29 crc kubenswrapper[4687]: I1203 18:57:29.665465 4687 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 18:57:30 crc kubenswrapper[4687]: I1203 18:57:30.674220 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxkzp" event={"ID":"7fb8bf55-550a-4185-89f0-11a5d709cef1","Type":"ContainerStarted","Data":"087384da468d90658c3cd62106c04e344a48c49fc5d7cceb1a3f914755bd3191"} Dec 03 18:57:31 crc kubenswrapper[4687]: I1203 18:57:31.690377 4687 generic.go:334] "Generic (PLEG): container finished" podID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerID="087384da468d90658c3cd62106c04e344a48c49fc5d7cceb1a3f914755bd3191" exitCode=0 Dec 03 18:57:31 crc kubenswrapper[4687]: I1203 18:57:31.690436 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxkzp" event={"ID":"7fb8bf55-550a-4185-89f0-11a5d709cef1","Type":"ContainerDied","Data":"087384da468d90658c3cd62106c04e344a48c49fc5d7cceb1a3f914755bd3191"} Dec 03 18:57:33 crc kubenswrapper[4687]: I1203 18:57:33.714684 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxkzp" event={"ID":"7fb8bf55-550a-4185-89f0-11a5d709cef1","Type":"ContainerStarted","Data":"4ccc8ae9edc3e2fba25d1b25a9b594b85bd9e022cb5943c89fdc4275006ed86d"} Dec 03 18:57:33 crc kubenswrapper[4687]: I1203 18:57:33.739920 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xxkzp" podStartSLOduration=2.791915915 podStartE2EDuration="5.739894055s" podCreationTimestamp="2025-12-03 18:57:28 +0000 UTC" firstStartedPulling="2025-12-03 18:57:29.665001974 +0000 UTC m=+4682.555697447" lastFinishedPulling="2025-12-03 18:57:32.612980134 +0000 UTC 
m=+4685.503675587" observedRunningTime="2025-12-03 18:57:33.73336473 +0000 UTC m=+4686.624060223" watchObservedRunningTime="2025-12-03 18:57:33.739894055 +0000 UTC m=+4686.630589508" Dec 03 18:57:38 crc kubenswrapper[4687]: I1203 18:57:38.465105 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:38 crc kubenswrapper[4687]: I1203 18:57:38.466268 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:38 crc kubenswrapper[4687]: I1203 18:57:38.566171 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:38 crc kubenswrapper[4687]: I1203 18:57:38.801193 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:38 crc kubenswrapper[4687]: I1203 18:57:38.846390 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xxkzp"] Dec 03 18:57:40 crc kubenswrapper[4687]: I1203 18:57:40.779648 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xxkzp" podUID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerName="registry-server" containerID="cri-o://4ccc8ae9edc3e2fba25d1b25a9b594b85bd9e022cb5943c89fdc4275006ed86d" gracePeriod=2 Dec 03 18:57:41 crc kubenswrapper[4687]: I1203 18:57:41.790005 4687 generic.go:334] "Generic (PLEG): container finished" podID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerID="4ccc8ae9edc3e2fba25d1b25a9b594b85bd9e022cb5943c89fdc4275006ed86d" exitCode=0 Dec 03 18:57:41 crc kubenswrapper[4687]: I1203 18:57:41.790052 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxkzp" 
event={"ID":"7fb8bf55-550a-4185-89f0-11a5d709cef1","Type":"ContainerDied","Data":"4ccc8ae9edc3e2fba25d1b25a9b594b85bd9e022cb5943c89fdc4275006ed86d"} Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.426708 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.431271 4687 scope.go:117] "RemoveContainer" containerID="200fe29ec6b1383a7ccb58448262cd8f200fd8cef3c9d173bd3de2f4c4eb2896" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.562861 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-utilities\") pod \"7fb8bf55-550a-4185-89f0-11a5d709cef1\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.563404 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2l46\" (UniqueName: \"kubernetes.io/projected/7fb8bf55-550a-4185-89f0-11a5d709cef1-kube-api-access-x2l46\") pod \"7fb8bf55-550a-4185-89f0-11a5d709cef1\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.563489 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-catalog-content\") pod \"7fb8bf55-550a-4185-89f0-11a5d709cef1\" (UID: \"7fb8bf55-550a-4185-89f0-11a5d709cef1\") " Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.564360 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-utilities" (OuterVolumeSpecName: "utilities") pod "7fb8bf55-550a-4185-89f0-11a5d709cef1" (UID: "7fb8bf55-550a-4185-89f0-11a5d709cef1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.571638 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fb8bf55-550a-4185-89f0-11a5d709cef1-kube-api-access-x2l46" (OuterVolumeSpecName: "kube-api-access-x2l46") pod "7fb8bf55-550a-4185-89f0-11a5d709cef1" (UID: "7fb8bf55-550a-4185-89f0-11a5d709cef1"). InnerVolumeSpecName "kube-api-access-x2l46". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.621871 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fb8bf55-550a-4185-89f0-11a5d709cef1" (UID: "7fb8bf55-550a-4185-89f0-11a5d709cef1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.665831 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2l46\" (UniqueName: \"kubernetes.io/projected/7fb8bf55-550a-4185-89f0-11a5d709cef1-kube-api-access-x2l46\") on node \"crc\" DevicePath \"\"" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.665866 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.665877 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fb8bf55-550a-4185-89f0-11a5d709cef1-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.804216 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xxkzp" 
event={"ID":"7fb8bf55-550a-4185-89f0-11a5d709cef1","Type":"ContainerDied","Data":"8ace41835380b285b270b6329d0003df223ade42374f8ad9801044bf9a88366b"} Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.804287 4687 scope.go:117] "RemoveContainer" containerID="4ccc8ae9edc3e2fba25d1b25a9b594b85bd9e022cb5943c89fdc4275006ed86d" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.804318 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xxkzp" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.827553 4687 scope.go:117] "RemoveContainer" containerID="087384da468d90658c3cd62106c04e344a48c49fc5d7cceb1a3f914755bd3191" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.857439 4687 scope.go:117] "RemoveContainer" containerID="311fc8d6495140fbd1de7af0212bbb347fc75506763db4d8ada397ac2266de3e" Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.859586 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xxkzp"] Dec 03 18:57:42 crc kubenswrapper[4687]: I1203 18:57:42.872173 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xxkzp"] Dec 03 18:57:43 crc kubenswrapper[4687]: I1203 18:57:43.420082 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fb8bf55-550a-4185-89f0-11a5d709cef1" path="/var/lib/kubelet/pods/7fb8bf55-550a-4185-89f0-11a5d709cef1/volumes" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.657156 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ngw7z"] Dec 03 18:58:25 crc kubenswrapper[4687]: E1203 18:58:25.658392 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerName="extract-utilities" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.658414 4687 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerName="extract-utilities" Dec 03 18:58:25 crc kubenswrapper[4687]: E1203 18:58:25.658444 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerName="extract-content" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.658459 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerName="extract-content" Dec 03 18:58:25 crc kubenswrapper[4687]: E1203 18:58:25.658528 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerName="registry-server" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.658543 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerName="registry-server" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.658895 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fb8bf55-550a-4185-89f0-11a5d709cef1" containerName="registry-server" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.661320 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.670037 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngw7z"] Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.746536 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-catalog-content\") pod \"certified-operators-ngw7z\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.746920 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-utilities\") pod \"certified-operators-ngw7z\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.747270 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds9lk\" (UniqueName: \"kubernetes.io/projected/345dd333-2528-4723-b4f2-fc9315cff8d1-kube-api-access-ds9lk\") pod \"certified-operators-ngw7z\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.849556 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-catalog-content\") pod \"certified-operators-ngw7z\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.849797 4687 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-utilities\") pod \"certified-operators-ngw7z\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.849977 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds9lk\" (UniqueName: \"kubernetes.io/projected/345dd333-2528-4723-b4f2-fc9315cff8d1-kube-api-access-ds9lk\") pod \"certified-operators-ngw7z\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.850179 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-catalog-content\") pod \"certified-operators-ngw7z\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.850360 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-utilities\") pod \"certified-operators-ngw7z\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.910389 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds9lk\" (UniqueName: \"kubernetes.io/projected/345dd333-2528-4723-b4f2-fc9315cff8d1-kube-api-access-ds9lk\") pod \"certified-operators-ngw7z\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:25 crc kubenswrapper[4687]: I1203 18:58:25.982139 4687 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:26 crc kubenswrapper[4687]: I1203 18:58:26.488697 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngw7z"] Dec 03 18:58:27 crc kubenswrapper[4687]: I1203 18:58:27.262753 4687 generic.go:334] "Generic (PLEG): container finished" podID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerID="cd6496b86b4639944638665a0df5eb5f3445f62641583e4c58090472af96763f" exitCode=0 Dec 03 18:58:27 crc kubenswrapper[4687]: I1203 18:58:27.262851 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngw7z" event={"ID":"345dd333-2528-4723-b4f2-fc9315cff8d1","Type":"ContainerDied","Data":"cd6496b86b4639944638665a0df5eb5f3445f62641583e4c58090472af96763f"} Dec 03 18:58:27 crc kubenswrapper[4687]: I1203 18:58:27.263085 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngw7z" event={"ID":"345dd333-2528-4723-b4f2-fc9315cff8d1","Type":"ContainerStarted","Data":"ab5cbed7c4369e0aad3c4b810ad41c242af63dc08a8fd093efa06002a6b84349"} Dec 03 18:58:28 crc kubenswrapper[4687]: I1203 18:58:28.277998 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngw7z" event={"ID":"345dd333-2528-4723-b4f2-fc9315cff8d1","Type":"ContainerStarted","Data":"a851fc09933028cb4c4c9ae9686eee64790ca73cb6486b3b3723818a1d2fe5cd"} Dec 03 18:58:29 crc kubenswrapper[4687]: I1203 18:58:29.290204 4687 generic.go:334] "Generic (PLEG): container finished" podID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerID="a851fc09933028cb4c4c9ae9686eee64790ca73cb6486b3b3723818a1d2fe5cd" exitCode=0 Dec 03 18:58:29 crc kubenswrapper[4687]: I1203 18:58:29.290314 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngw7z" 
event={"ID":"345dd333-2528-4723-b4f2-fc9315cff8d1","Type":"ContainerDied","Data":"a851fc09933028cb4c4c9ae9686eee64790ca73cb6486b3b3723818a1d2fe5cd"} Dec 03 18:58:30 crc kubenswrapper[4687]: I1203 18:58:30.301424 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngw7z" event={"ID":"345dd333-2528-4723-b4f2-fc9315cff8d1","Type":"ContainerStarted","Data":"e3782683cf43ba31fa0ad60f1c6f271932bb2cdee98cd66cdf14c1c7153f3f5d"} Dec 03 18:58:30 crc kubenswrapper[4687]: I1203 18:58:30.341099 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ngw7z" podStartSLOduration=2.932382777 podStartE2EDuration="5.341069252s" podCreationTimestamp="2025-12-03 18:58:25 +0000 UTC" firstStartedPulling="2025-12-03 18:58:27.267931956 +0000 UTC m=+4740.158627409" lastFinishedPulling="2025-12-03 18:58:29.676618441 +0000 UTC m=+4742.567313884" observedRunningTime="2025-12-03 18:58:30.334893326 +0000 UTC m=+4743.225588779" watchObservedRunningTime="2025-12-03 18:58:30.341069252 +0000 UTC m=+4743.231764715" Dec 03 18:58:35 crc kubenswrapper[4687]: I1203 18:58:35.983219 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:35 crc kubenswrapper[4687]: I1203 18:58:35.983877 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:36 crc kubenswrapper[4687]: I1203 18:58:36.044034 4687 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:36 crc kubenswrapper[4687]: I1203 18:58:36.415132 4687 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:36 crc kubenswrapper[4687]: I1203 18:58:36.463116 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-ngw7z"] Dec 03 18:58:38 crc kubenswrapper[4687]: I1203 18:58:38.396730 4687 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ngw7z" podUID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerName="registry-server" containerID="cri-o://e3782683cf43ba31fa0ad60f1c6f271932bb2cdee98cd66cdf14c1c7153f3f5d" gracePeriod=2 Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.414815 4687 generic.go:334] "Generic (PLEG): container finished" podID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerID="e3782683cf43ba31fa0ad60f1c6f271932bb2cdee98cd66cdf14c1c7153f3f5d" exitCode=0 Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.429613 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngw7z" event={"ID":"345dd333-2528-4723-b4f2-fc9315cff8d1","Type":"ContainerDied","Data":"e3782683cf43ba31fa0ad60f1c6f271932bb2cdee98cd66cdf14c1c7153f3f5d"} Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.574908 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.661698 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-utilities\") pod \"345dd333-2528-4723-b4f2-fc9315cff8d1\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.662066 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-catalog-content\") pod \"345dd333-2528-4723-b4f2-fc9315cff8d1\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.662108 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds9lk\" (UniqueName: \"kubernetes.io/projected/345dd333-2528-4723-b4f2-fc9315cff8d1-kube-api-access-ds9lk\") pod \"345dd333-2528-4723-b4f2-fc9315cff8d1\" (UID: \"345dd333-2528-4723-b4f2-fc9315cff8d1\") " Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.662728 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-utilities" (OuterVolumeSpecName: "utilities") pod "345dd333-2528-4723-b4f2-fc9315cff8d1" (UID: "345dd333-2528-4723-b4f2-fc9315cff8d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.663307 4687 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-utilities\") on node \"crc\" DevicePath \"\"" Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.674913 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345dd333-2528-4723-b4f2-fc9315cff8d1-kube-api-access-ds9lk" (OuterVolumeSpecName: "kube-api-access-ds9lk") pod "345dd333-2528-4723-b4f2-fc9315cff8d1" (UID: "345dd333-2528-4723-b4f2-fc9315cff8d1"). InnerVolumeSpecName "kube-api-access-ds9lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.738064 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "345dd333-2528-4723-b4f2-fc9315cff8d1" (UID: "345dd333-2528-4723-b4f2-fc9315cff8d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.765059 4687 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345dd333-2528-4723-b4f2-fc9315cff8d1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 03 18:58:39 crc kubenswrapper[4687]: I1203 18:58:39.765085 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds9lk\" (UniqueName: \"kubernetes.io/projected/345dd333-2528-4723-b4f2-fc9315cff8d1-kube-api-access-ds9lk\") on node \"crc\" DevicePath \"\"" Dec 03 18:58:40 crc kubenswrapper[4687]: I1203 18:58:40.427678 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngw7z" event={"ID":"345dd333-2528-4723-b4f2-fc9315cff8d1","Type":"ContainerDied","Data":"ab5cbed7c4369e0aad3c4b810ad41c242af63dc08a8fd093efa06002a6b84349"} Dec 03 18:58:40 crc kubenswrapper[4687]: I1203 18:58:40.427753 4687 scope.go:117] "RemoveContainer" containerID="e3782683cf43ba31fa0ad60f1c6f271932bb2cdee98cd66cdf14c1c7153f3f5d" Dec 03 18:58:40 crc kubenswrapper[4687]: I1203 18:58:40.427811 4687 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngw7z" Dec 03 18:58:40 crc kubenswrapper[4687]: I1203 18:58:40.462407 4687 scope.go:117] "RemoveContainer" containerID="a851fc09933028cb4c4c9ae9686eee64790ca73cb6486b3b3723818a1d2fe5cd" Dec 03 18:58:40 crc kubenswrapper[4687]: I1203 18:58:40.490180 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ngw7z"] Dec 03 18:58:40 crc kubenswrapper[4687]: I1203 18:58:40.501974 4687 scope.go:117] "RemoveContainer" containerID="cd6496b86b4639944638665a0df5eb5f3445f62641583e4c58090472af96763f" Dec 03 18:58:40 crc kubenswrapper[4687]: I1203 18:58:40.503613 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ngw7z"] Dec 03 18:58:41 crc kubenswrapper[4687]: I1203 18:58:41.422157 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345dd333-2528-4723-b4f2-fc9315cff8d1" path="/var/lib/kubelet/pods/345dd333-2528-4723-b4f2-fc9315cff8d1/volumes" Dec 03 18:58:44 crc kubenswrapper[4687]: I1203 18:58:44.112367 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:58:44 crc kubenswrapper[4687]: I1203 18:58:44.113013 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:59:14 crc kubenswrapper[4687]: I1203 18:59:14.115565 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:59:14 crc kubenswrapper[4687]: I1203 18:59:14.118335 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:59:44 crc kubenswrapper[4687]: I1203 18:59:44.111615 4687 patch_prober.go:28] interesting pod/machine-config-daemon-gz2wq container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 18:59:44 crc kubenswrapper[4687]: I1203 18:59:44.112298 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 18:59:44 crc kubenswrapper[4687]: I1203 18:59:44.112358 4687 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" Dec 03 18:59:44 crc kubenswrapper[4687]: I1203 18:59:44.113477 4687 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40a1d95eb77c49f17f9105372cb4d9ea8b08cfddf4a84d85de442fd476929d55"} pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 18:59:44 crc kubenswrapper[4687]: I1203 18:59:44.113579 4687 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" podUID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerName="machine-config-daemon" containerID="cri-o://40a1d95eb77c49f17f9105372cb4d9ea8b08cfddf4a84d85de442fd476929d55" gracePeriod=600 Dec 03 18:59:45 crc kubenswrapper[4687]: I1203 18:59:45.143323 4687 generic.go:334] "Generic (PLEG): container finished" podID="fab93456-303f-4c39-93a9-f52dcab12ac1" containerID="40a1d95eb77c49f17f9105372cb4d9ea8b08cfddf4a84d85de442fd476929d55" exitCode=0 Dec 03 18:59:45 crc kubenswrapper[4687]: I1203 18:59:45.143403 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerDied","Data":"40a1d95eb77c49f17f9105372cb4d9ea8b08cfddf4a84d85de442fd476929d55"} Dec 03 18:59:45 crc kubenswrapper[4687]: I1203 18:59:45.143952 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gz2wq" event={"ID":"fab93456-303f-4c39-93a9-f52dcab12ac1","Type":"ContainerStarted","Data":"ea2c14177cc06bc8fd7324f3f682322d31cc6a1d9b50c3b3ef9e193f501f7880"} Dec 03 18:59:45 crc kubenswrapper[4687]: I1203 18:59:45.143991 4687 scope.go:117] "RemoveContainer" containerID="28d5e0e99939113be32c713db35d718ac9e4f0f51c01978eb5b484577ee3dd5f" Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.157637 4687 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"] Dec 03 19:00:00 crc kubenswrapper[4687]: E1203 19:00:00.158860 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerName="extract-content" Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.158880 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="345dd333-2528-4723-b4f2-fc9315cff8d1" 
containerName="extract-content"
Dec 03 19:00:00 crc kubenswrapper[4687]: E1203 19:00:00.158927 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerName="extract-utilities"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.158936 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerName="extract-utilities"
Dec 03 19:00:00 crc kubenswrapper[4687]: E1203 19:00:00.158948 4687 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerName="registry-server"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.158957 4687 state_mem.go:107] "Deleted CPUSet assignment" podUID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerName="registry-server"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.159234 4687 memory_manager.go:354] "RemoveStaleState removing state" podUID="345dd333-2528-4723-b4f2-fc9315cff8d1" containerName="registry-server"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.159974 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.163074 4687 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.163352 4687 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.172114 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"]
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.230758 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f87ac2c-4600-42cb-97c1-269618760c26-secret-volume\") pod \"collect-profiles-29413140-m6tg4\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.231054 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f87ac2c-4600-42cb-97c1-269618760c26-config-volume\") pod \"collect-profiles-29413140-m6tg4\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.231319 4687 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xr4t\" (UniqueName: \"kubernetes.io/projected/7f87ac2c-4600-42cb-97c1-269618760c26-kube-api-access-9xr4t\") pod \"collect-profiles-29413140-m6tg4\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.332627 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xr4t\" (UniqueName: \"kubernetes.io/projected/7f87ac2c-4600-42cb-97c1-269618760c26-kube-api-access-9xr4t\") pod \"collect-profiles-29413140-m6tg4\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.337036 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f87ac2c-4600-42cb-97c1-269618760c26-secret-volume\") pod \"collect-profiles-29413140-m6tg4\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.337205 4687 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f87ac2c-4600-42cb-97c1-269618760c26-config-volume\") pod \"collect-profiles-29413140-m6tg4\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.338295 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f87ac2c-4600-42cb-97c1-269618760c26-config-volume\") pod \"collect-profiles-29413140-m6tg4\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.346641 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f87ac2c-4600-42cb-97c1-269618760c26-secret-volume\") pod \"collect-profiles-29413140-m6tg4\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.353686 4687 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xr4t\" (UniqueName: \"kubernetes.io/projected/7f87ac2c-4600-42cb-97c1-269618760c26-kube-api-access-9xr4t\") pod \"collect-profiles-29413140-m6tg4\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.491297 4687 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:00 crc kubenswrapper[4687]: I1203 19:00:00.957280 4687 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"]
Dec 03 19:00:00 crc kubenswrapper[4687]: W1203 19:00:00.962408 4687 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f87ac2c_4600_42cb_97c1_269618760c26.slice/crio-a5d6c72f1c6b0b2b2a91f2484ca7f2aff311bebfbef292e2b07314224a6e3381 WatchSource:0}: Error finding container a5d6c72f1c6b0b2b2a91f2484ca7f2aff311bebfbef292e2b07314224a6e3381: Status 404 returned error can't find the container with id a5d6c72f1c6b0b2b2a91f2484ca7f2aff311bebfbef292e2b07314224a6e3381
Dec 03 19:00:01 crc kubenswrapper[4687]: I1203 19:00:01.344000 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4" event={"ID":"7f87ac2c-4600-42cb-97c1-269618760c26","Type":"ContainerStarted","Data":"965466d931ad576b98dc870a17d2f13a1f519045a96487cadd1093020bb59ec9"}
Dec 03 19:00:01 crc kubenswrapper[4687]: I1203 19:00:01.344260 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4" event={"ID":"7f87ac2c-4600-42cb-97c1-269618760c26","Type":"ContainerStarted","Data":"a5d6c72f1c6b0b2b2a91f2484ca7f2aff311bebfbef292e2b07314224a6e3381"}
Dec 03 19:00:01 crc kubenswrapper[4687]: I1203 19:00:01.372904 4687 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4" podStartSLOduration=1.3728885 podStartE2EDuration="1.3728885s" podCreationTimestamp="2025-12-03 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 19:00:01.366254051 +0000 UTC m=+4834.256949484" watchObservedRunningTime="2025-12-03 19:00:01.3728885 +0000 UTC m=+4834.263583933"
Dec 03 19:00:02 crc kubenswrapper[4687]: I1203 19:00:02.364175 4687 generic.go:334] "Generic (PLEG): container finished" podID="7f87ac2c-4600-42cb-97c1-269618760c26" containerID="965466d931ad576b98dc870a17d2f13a1f519045a96487cadd1093020bb59ec9" exitCode=0
Dec 03 19:00:02 crc kubenswrapper[4687]: I1203 19:00:02.364413 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4" event={"ID":"7f87ac2c-4600-42cb-97c1-269618760c26","Type":"ContainerDied","Data":"965466d931ad576b98dc870a17d2f13a1f519045a96487cadd1093020bb59ec9"}
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.742812 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.821408 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xr4t\" (UniqueName: \"kubernetes.io/projected/7f87ac2c-4600-42cb-97c1-269618760c26-kube-api-access-9xr4t\") pod \"7f87ac2c-4600-42cb-97c1-269618760c26\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") "
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.821538 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f87ac2c-4600-42cb-97c1-269618760c26-secret-volume\") pod \"7f87ac2c-4600-42cb-97c1-269618760c26\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") "
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.821619 4687 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f87ac2c-4600-42cb-97c1-269618760c26-config-volume\") pod \"7f87ac2c-4600-42cb-97c1-269618760c26\" (UID: \"7f87ac2c-4600-42cb-97c1-269618760c26\") "
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.822987 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f87ac2c-4600-42cb-97c1-269618760c26-config-volume" (OuterVolumeSpecName: "config-volume") pod "7f87ac2c-4600-42cb-97c1-269618760c26" (UID: "7f87ac2c-4600-42cb-97c1-269618760c26"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.833428 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f87ac2c-4600-42cb-97c1-269618760c26-kube-api-access-9xr4t" (OuterVolumeSpecName: "kube-api-access-9xr4t") pod "7f87ac2c-4600-42cb-97c1-269618760c26" (UID: "7f87ac2c-4600-42cb-97c1-269618760c26"). InnerVolumeSpecName "kube-api-access-9xr4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.838487 4687 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f87ac2c-4600-42cb-97c1-269618760c26-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7f87ac2c-4600-42cb-97c1-269618760c26" (UID: "7f87ac2c-4600-42cb-97c1-269618760c26"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.923656 4687 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xr4t\" (UniqueName: \"kubernetes.io/projected/7f87ac2c-4600-42cb-97c1-269618760c26-kube-api-access-9xr4t\") on node \"crc\" DevicePath \"\""
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.924389 4687 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f87ac2c-4600-42cb-97c1-269618760c26-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 03 19:00:03 crc kubenswrapper[4687]: I1203 19:00:03.924519 4687 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f87ac2c-4600-42cb-97c1-269618760c26-config-volume\") on node \"crc\" DevicePath \"\""
Dec 03 19:00:04 crc kubenswrapper[4687]: I1203 19:00:04.393435 4687 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4" event={"ID":"7f87ac2c-4600-42cb-97c1-269618760c26","Type":"ContainerDied","Data":"a5d6c72f1c6b0b2b2a91f2484ca7f2aff311bebfbef292e2b07314224a6e3381"}
Dec 03 19:00:04 crc kubenswrapper[4687]: I1203 19:00:04.393482 4687 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5d6c72f1c6b0b2b2a91f2484ca7f2aff311bebfbef292e2b07314224a6e3381"
Dec 03 19:00:04 crc kubenswrapper[4687]: I1203 19:00:04.393544 4687 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29413140-m6tg4"
Dec 03 19:00:04 crc kubenswrapper[4687]: I1203 19:00:04.457514 4687 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq"]
Dec 03 19:00:04 crc kubenswrapper[4687]: I1203 19:00:04.465416 4687 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29413095-w5lzq"]
Dec 03 19:00:05 crc kubenswrapper[4687]: I1203 19:00:05.423778 4687 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de363e74-3a08-4bba-b12c-4cbeeffad444" path="/var/lib/kubelet/pods/de363e74-3a08-4bba-b12c-4cbeeffad444/volumes"
Dec 03 19:00:11 crc kubenswrapper[4687]: I1203 19:00:11.778511 4687 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="04732311-c8eb-4351-a564-78ce8c8e1811" containerName="galera" probeResult="failure" output="command timed out"
Dec 03 19:00:11 crc kubenswrapper[4687]: I1203 19:00:11.778978 4687 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="04732311-c8eb-4351-a564-78ce8c8e1811" containerName="galera" probeResult="failure" output="command timed out"